00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 425 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3087 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.068 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.069 using credential 00000000-0000-0000-0000-000000000002 00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.111 Fetching changes from the remote Git repository 00:00:00.112 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.177 Using shallow fetch with depth 1 00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.177 > git --version # timeout=10 00:00:00.219 > git --version # 'git version 2.39.2' 00:00:00.219 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.325 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.339 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.351 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:05.351 > git config core.sparsecheckout # timeout=10 00:00:05.363 > git read-tree -mu HEAD # timeout=10 00:00:05.379 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:05.398 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:05.398 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:05.504 [Pipeline] Start of Pipeline 00:00:05.519 [Pipeline] library 00:00:05.521 Loading library shm_lib@master 00:00:05.521 Library shm_lib@master is cached. Copying from home. 00:00:05.542 [Pipeline] node 00:00:05.557 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.558 [Pipeline] { 00:00:05.568 [Pipeline] catchError 00:00:05.569 [Pipeline] { 00:00:05.585 [Pipeline] wrap 00:00:05.596 [Pipeline] { 00:00:05.602 [Pipeline] stage 00:00:05.604 [Pipeline] { (Prologue) 00:00:05.769 [Pipeline] sh 00:00:06.061 + logger -p user.info -t JENKINS-CI 00:00:06.078 [Pipeline] echo 00:00:06.079 Node: CYP9 00:00:06.087 [Pipeline] sh 00:00:06.418 [Pipeline] setCustomBuildProperty 00:00:06.429 [Pipeline] echo 00:00:06.431 Cleanup processes 00:00:06.436 [Pipeline] sh 00:00:06.722 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.722 2449116 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.736 [Pipeline] sh 00:00:07.089 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.089 ++ grep -v 'sudo pgrep' 00:00:07.089 ++ awk '{print $1}' 00:00:07.089 + sudo kill -9 00:00:07.089 + true 00:00:07.103 [Pipeline] cleanWs 00:00:07.113 [WS-CLEANUP] Deleting project workspace... 00:00:07.114 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.120 [WS-CLEANUP] done 00:00:07.124 [Pipeline] setCustomBuildProperty 00:00:07.138 [Pipeline] sh 00:00:07.427 + sudo git config --global --replace-all safe.directory '*' 00:00:07.498 [Pipeline] nodesByLabel 00:00:07.499 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.509 [Pipeline] httpRequest 00:00:07.514 HttpMethod: GET 00:00:07.514 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.520 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:07.523 Response Code: HTTP/1.1 200 OK 00:00:07.524 Success: Status code 200 is in the accepted range: 200,404 00:00:07.524 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.381 [Pipeline] sh 00:00:08.672 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.695 [Pipeline] httpRequest 00:00:08.700 HttpMethod: GET 00:00:08.701 URL: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:08.701 Sending request to url: http://10.211.164.101/packages/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:08.713 Response Code: HTTP/1.1 200 OK 00:00:08.714 Success: Status code 200 is in the accepted range: 200,404 00:00:08.714 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:22.118 [Pipeline] sh 00:00:22.411 + tar --no-same-owner -xf spdk_4506c0c368f63ba9b9b013ecff216cef6ee8d0a4.tar.gz 00:00:24.985 [Pipeline] sh 00:00:25.274 + git -C spdk log --oneline -n5 00:00:25.274 4506c0c36 test/common: Enable inherit_errexit 00:00:25.274 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:00:25.274 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:00:25.274 1dc065205 test/scheduler: Calculate median of the cpu load samples 00:00:25.274 b22f1b34d test/scheduler: Enhance lookup of the $old_cgroup in move_proc() 00:00:25.297 [Pipeline] withCredentials 00:00:25.310 > git --version # timeout=10 00:00:25.324 > git --version # 'git version 2.39.2' 00:00:25.347 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:25.349 [Pipeline] { 00:00:25.360 [Pipeline] retry 00:00:25.362 [Pipeline] { 00:00:25.380 [Pipeline] sh 00:00:25.673 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:25.950 [Pipeline] } 00:00:25.972 [Pipeline] // retry 00:00:25.977 [Pipeline] } 00:00:26.003 [Pipeline] // withCredentials 00:00:26.016 [Pipeline] httpRequest 00:00:26.021 HttpMethod: GET 00:00:26.022 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:26.022 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:26.025 Response Code: HTTP/1.1 200 OK 00:00:26.026 Success: Status code 200 is in the accepted range: 200,404 00:00:26.027 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.495 [Pipeline] sh 00:00:32.784 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:34.719 [Pipeline] sh 00:00:35.007 + git -C dpdk log --oneline -n5 00:00:35.007 eeb0605f11 version: 23.11.0 00:00:35.007 238778122a doc: update release notes for 23.11 00:00:35.007 46aa6b3cfc doc: fix description of RSS features 00:00:35.007 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:35.007 
7e421ae345 devtools: support skipping forbid rule check 00:00:35.020 [Pipeline] } 00:00:35.039 [Pipeline] // stage 00:00:35.047 [Pipeline] stage 00:00:35.049 [Pipeline] { (Prepare) 00:00:35.071 [Pipeline] writeFile 00:00:35.089 [Pipeline] sh 00:00:35.376 + logger -p user.info -t JENKINS-CI 00:00:35.390 [Pipeline] sh 00:00:35.702 + logger -p user.info -t JENKINS-CI 00:00:35.716 [Pipeline] sh 00:00:36.005 + cat autorun-spdk.conf 00:00:36.005 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.005 SPDK_TEST_NVMF=1 00:00:36.005 SPDK_TEST_NVME_CLI=1 00:00:36.005 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.005 SPDK_TEST_NVMF_NICS=e810 00:00:36.005 SPDK_TEST_VFIOUSER=1 00:00:36.005 SPDK_RUN_UBSAN=1 00:00:36.005 NET_TYPE=phy 00:00:36.005 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:36.005 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:36.014 RUN_NIGHTLY=1 00:00:36.017 [Pipeline] readFile 00:00:36.040 [Pipeline] withEnv 00:00:36.042 [Pipeline] { 00:00:36.055 [Pipeline] sh 00:00:36.341 + set -ex 00:00:36.341 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:36.341 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:36.341 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.341 ++ SPDK_TEST_NVMF=1 00:00:36.341 ++ SPDK_TEST_NVME_CLI=1 00:00:36.341 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.341 ++ SPDK_TEST_NVMF_NICS=e810 00:00:36.341 ++ SPDK_TEST_VFIOUSER=1 00:00:36.341 ++ SPDK_RUN_UBSAN=1 00:00:36.341 ++ NET_TYPE=phy 00:00:36.341 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:36.341 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:36.341 ++ RUN_NIGHTLY=1 00:00:36.341 + case $SPDK_TEST_NVMF_NICS in 00:00:36.341 + DRIVERS=ice 00:00:36.341 + [[ tcp == \r\d\m\a ]] 00:00:36.341 + [[ -n ice ]] 00:00:36.341 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:36.341 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:36.341 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:36.341 rmmod: ERROR: Module irdma is not currently loaded 00:00:36.341 rmmod: ERROR: Module i40iw is not currently loaded 00:00:36.341 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:36.341 + true 00:00:36.341 + for D in $DRIVERS 00:00:36.341 + sudo modprobe ice 00:00:36.341 + exit 0 00:00:36.351 [Pipeline] } 00:00:36.369 [Pipeline] // withEnv 00:00:36.374 [Pipeline] } 00:00:36.390 [Pipeline] // stage 00:00:36.398 [Pipeline] catchError 00:00:36.400 [Pipeline] { 00:00:36.414 [Pipeline] timeout 00:00:36.414 Timeout set to expire in 40 min 00:00:36.416 [Pipeline] { 00:00:36.432 [Pipeline] stage 00:00:36.434 [Pipeline] { (Tests) 00:00:36.450 [Pipeline] sh 00:00:36.738 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.738 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.738 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.738 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:36.738 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.738 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.738 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:36.738 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.738 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.738 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.738 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.738 + source /etc/os-release 00:00:36.738 ++ NAME='Fedora Linux' 00:00:36.738 ++ VERSION='38 (Cloud Edition)' 00:00:36.738 ++ ID=fedora 00:00:36.738 ++ VERSION_ID=38 00:00:36.738 ++ VERSION_CODENAME= 00:00:36.738 ++ PLATFORM_ID=platform:f38 00:00:36.738 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:36.738 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:36.738 ++ LOGO=fedora-logo-icon 00:00:36.738 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:36.738 ++ HOME_URL=https://fedoraproject.org/ 00:00:36.738 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:36.738 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:36.738 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:36.738 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:36.738 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:36.738 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:36.738 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:36.738 ++ SUPPORT_END=2024-05-14 00:00:36.738 ++ VARIANT='Cloud Edition' 00:00:36.738 ++ VARIANT_ID=cloud 00:00:36.738 + uname -a 00:00:36.738 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:36.738 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:39.287 Hugepages 00:00:39.287 node hugesize free / total 00:00:39.287 node0 1048576kB 0 / 0 00:00:39.287 node0 2048kB 0 / 0 00:00:39.287 node1 1048576kB 0 / 0 00:00:39.287 node1 2048kB 0 / 0 00:00:39.287 00:00:39.287 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:39.287 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:39.287 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:39.287 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:39.287 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:39.287 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:39.287 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:39.287 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:39.287 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:39.287 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:39.287 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:39.287 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:39.287 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:39.287 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:39.287 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:39.287 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:39.287 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:39.287 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:39.287 + rm -f /tmp/spdk-ld-path 00:00:39.287 + source autorun-spdk.conf 00:00:39.287 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.287 ++ SPDK_TEST_NVMF=1 00:00:39.287 ++ SPDK_TEST_NVME_CLI=1 00:00:39.287 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.287 ++ SPDK_TEST_NVMF_NICS=e810 00:00:39.287 ++ SPDK_TEST_VFIOUSER=1 00:00:39.287 ++ SPDK_RUN_UBSAN=1 00:00:39.287 ++ NET_TYPE=phy 00:00:39.287 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:39.287 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.287 ++ RUN_NIGHTLY=1 00:00:39.287 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:39.287 + [[ -n '' ]] 00:00:39.287 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:00:39.287 + for M in /var/spdk/build-*-manifest.txt 00:00:39.287 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:39.287 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:39.287 + for M in /var/spdk/build-*-manifest.txt 00:00:39.287 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:39.287 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:39.287 ++ uname 00:00:39.287 + [[ Linux == \L\i\n\u\x ]] 00:00:39.287 + sudo dmesg -T 00:00:39.550 + sudo dmesg --clear 00:00:39.550 + dmesg_pid=2450671 00:00:39.550 + [[ Fedora Linux == FreeBSD ]] 00:00:39.550 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.550 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:39.550 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:39.550 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.550 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:39.550 + [[ -x /usr/src/fio-static/fio ]] 00:00:39.550 + export FIO_BIN=/usr/src/fio-static/fio 00:00:39.550 + FIO_BIN=/usr/src/fio-static/fio 00:00:39.550 + sudo dmesg -Tw 00:00:39.550 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:39.550 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:39.550 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:39.550 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.550 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:39.550 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:39.550 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.550 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:39.550 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:39.550 Test configuration: 00:00:39.550 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.550 SPDK_TEST_NVMF=1 00:00:39.550 SPDK_TEST_NVME_CLI=1 00:00:39.550 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.550 SPDK_TEST_NVMF_NICS=e810 00:00:39.550 SPDK_TEST_VFIOUSER=1 00:00:39.550 SPDK_RUN_UBSAN=1 00:00:39.550 NET_TYPE=phy 00:00:39.550 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:39.550 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.550 RUN_NIGHTLY=1 09:54:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:39.550 09:54:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:39.550 09:54:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:39.550 09:54:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:39.550 09:54:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.550 09:54:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:00:39.550 09:54:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.550 09:54:25 -- paths/export.sh@5 -- $ export PATH 00:00:39.550 09:54:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:39.550 09:54:25 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:39.550 09:54:25 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:39.550 09:54:25 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715759665.XXXXXX 00:00:39.550 09:54:25 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715759665.Y6uHJg 00:00:39.550 09:54:25 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:39.550 09:54:25 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:00:39.550 09:54:25 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:39.550 09:54:25 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:00:39.550 09:54:25 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:39.550 09:54:25 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:39.550 09:54:25 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:39.550 09:54:25 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:39.550 09:54:25 -- common/autotest_common.sh@10 -- $ set +x 00:00:39.550 09:54:25 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:00:39.550 09:54:25 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:39.550 09:54:25 -- pm/common@17 -- $ local monitor 00:00:39.550 09:54:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.550 09:54:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.550 09:54:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.550 09:54:25 -- pm/common@21 -- $ date +%s 00:00:39.550 09:54:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:39.550 09:54:25 -- pm/common@21 -- $ date +%s 00:00:39.550 09:54:25 -- pm/common@25 -- $ sleep 1 00:00:39.550 09:54:25 -- pm/common@21 -- $ date +%s 00:00:39.550 09:54:25 -- pm/common@21 -- $ date +%s 00:00:39.550 09:54:25 -- pm/common@21 
-- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715759665 00:00:39.550 09:54:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715759665 00:00:39.550 09:54:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715759665 00:00:39.550 09:54:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715759665 00:00:39.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715759665_collect-vmstat.pm.log 00:00:39.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715759665_collect-cpu-load.pm.log 00:00:39.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715759665_collect-cpu-temp.pm.log 00:00:39.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715759665_collect-bmc-pm.bmc.pm.log 00:00:40.760 09:54:26 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:40.760 09:54:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:40.760 09:54:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:40.760 09:54:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.760 09:54:26 -- spdk/autobuild.sh@16 -- $ date -u 00:00:40.760 Wed May 15 07:54:26 AM UTC 2024 00:00:40.760 09:54:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:40.760 v24.05-pre-658-g4506c0c36 00:00:40.760 09:54:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:40.760 09:54:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:40.760 09:54:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:40.760 09:54:26 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:00:40.760 09:54:26 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:40.760 09:54:26 -- common/autotest_common.sh@10 -- $ set +x 00:00:40.760 ************************************ 00:00:40.760 START TEST ubsan 00:00:40.760 ************************************ 00:00:40.760 09:54:26 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:00:40.760 using ubsan 00:00:40.760 00:00:40.761 real 0m0.000s 00:00:40.761 user 0m0.000s 00:00:40.761 sys 0m0.000s 00:00:40.761 09:54:26 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:00:40.761 09:54:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:40.761 ************************************ 00:00:40.761 END TEST ubsan 00:00:40.761 ************************************ 00:00:40.761 09:54:26 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:00:40.761 09:54:26 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:00:40.761 09:54:26 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:00:40.761 09:54:26 -- common/autotest_common.sh@1098 -- $ '[' 2 -le 1 ']' 00:00:40.761 09:54:26 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:00:40.761 09:54:26 -- common/autotest_common.sh@10 -- $ 
set +x 00:00:40.761 ************************************ 00:00:40.761 START TEST build_native_dpdk 00:00:40.761 ************************************ 00:00:40.761 09:54:26 build_native_dpdk -- common/autotest_common.sh@1122 -- $ _build_native_dpdk 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:00:40.761 eeb0605f11 version: 23.11.0 00:00:40.761 238778122a doc: update release notes for 23.11 00:00:40.761 46aa6b3cfc doc: fix description of RSS features 00:00:40.761 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:40.761 7e421ae345 devtools: support skipping forbid rule check 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:00:40.761 09:54:26 
build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:00:40.761 09:54:26 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:00:40.761 patching file config/rte_config.h 00:00:40.761 Hunk #1 succeeded at 60 (offset 1 line). 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:00:40.761 09:54:26 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:46.067 The Meson build system 00:00:46.067 Version: 1.3.1 00:00:46.067 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:00:46.067 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:00:46.067 Build type: native build 00:00:46.067 Program cat found: YES (/usr/bin/cat) 00:00:46.067 Project name: DPDK 00:00:46.067 Project version: 23.11.0 00:00:46.067 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:00:46.067 C linker for the host machine: gcc ld.bfd 2.39-16 00:00:46.067 Host machine cpu family: x86_64 00:00:46.067 Host machine cpu: x86_64 00:00:46.067 Message: ## Building in Developer Mode ## 00:00:46.067 Program pkg-config found: YES (/usr/bin/pkg-config) 00:00:46.067 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:00:46.067 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:00:46.067 Program python3 found: YES (/usr/bin/python3) 00:00:46.067 Program cat found: YES (/usr/bin/cat) 00:00:46.067 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:00:46.067 Compiler for C supports arguments -march=native: YES 00:00:46.067 Checking for size of "void *" : 8 00:00:46.067 Checking for size of "void *" : 8 (cached) 00:00:46.067 Library m found: YES 00:00:46.067 Library numa found: YES 00:00:46.067 Has header "numaif.h" : YES 00:00:46.067 Library fdt found: NO 00:00:46.067 Library execinfo found: NO 00:00:46.067 Has header "execinfo.h" : YES 00:00:46.067 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:00:46.067 Run-time dependency libarchive found: NO (tried pkgconfig) 00:00:46.067 Run-time dependency libbsd found: NO (tried pkgconfig) 00:00:46.067 Run-time dependency jansson found: NO (tried pkgconfig) 00:00:46.067 Run-time dependency openssl found: YES 3.0.9 00:00:46.067 Run-time dependency libpcap found: YES 1.10.4 00:00:46.067 Has header "pcap.h" with dependency libpcap: YES 00:00:46.067 Compiler for C supports arguments -Wcast-qual: YES 00:00:46.067 Compiler for C supports arguments -Wdeprecated: YES 00:00:46.067 Compiler for C supports arguments -Wformat: YES 00:00:46.067 Compiler for C supports arguments -Wformat-nonliteral: NO 00:00:46.067 Compiler for C supports arguments -Wformat-security: NO 00:00:46.067 Compiler for C supports arguments -Wmissing-declarations: YES 00:00:46.067 Compiler for C supports arguments -Wmissing-prototypes: YES 00:00:46.067 Compiler for C supports arguments -Wnested-externs: YES 00:00:46.067 Compiler for C supports arguments -Wold-style-definition: YES 00:00:46.067 Compiler for C supports arguments -Wpointer-arith: YES 00:00:46.067 Compiler for C supports arguments -Wsign-compare: YES 00:00:46.067 Compiler for C supports arguments -Wstrict-prototypes: YES 00:00:46.067 Compiler for C supports arguments -Wundef: YES 00:00:46.067 Compiler for C supports arguments -Wwrite-strings: YES 00:00:46.067 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:00:46.067 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:00:46.067 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:00:46.067 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:00:46.067 Program objdump found: YES (/usr/bin/objdump) 00:00:46.067 Compiler for C supports arguments -mavx512f: YES 00:00:46.067 Checking if "AVX512 checking" compiles: YES 00:00:46.067 Fetching value of define "__SSE4_2__" : 1 00:00:46.067 Fetching value of define "__AES__" : 1 00:00:46.067 Fetching value of define "__AVX__" : 1 00:00:46.067 Fetching value of define "__AVX2__" : 1 00:00:46.067 Fetching value of define "__AVX512BW__" : 1 00:00:46.067 Fetching value of define "__AVX512CD__" : 1 00:00:46.067 Fetching value of define "__AVX512DQ__" : 1 00:00:46.067 Fetching value of define "__AVX512F__" : 1 00:00:46.067 Fetching value of define "__AVX512VL__" : 1 00:00:46.067 Fetching value of define "__PCLMUL__" : 1 00:00:46.067 Fetching value of define "__RDRND__" : 1 00:00:46.067 Fetching value of define "__RDSEED__" : 1 00:00:46.067 Fetching value of define "__VPCLMULQDQ__" : 1 00:00:46.067 Fetching value of define "__znver1__" : (undefined) 00:00:46.067 Fetching value of define "__znver2__" : (undefined) 00:00:46.067 Fetching value of define "__znver3__" : (undefined) 00:00:46.067 Fetching value of define "__znver4__" : (undefined) 00:00:46.067 Compiler for C supports arguments -Wno-format-truncation: YES 00:00:46.067 Message: lib/log: Defining dependency "log" 00:00:46.067 Message: lib/kvargs: Defining dependency "kvargs" 00:00:46.067 Message: lib/telemetry: Defining dependency "telemetry" 
00:00:46.067 Checking for function "getentropy" : NO 00:00:46.067 Message: lib/eal: Defining dependency "eal" 00:00:46.067 Message: lib/ring: Defining dependency "ring" 00:00:46.067 Message: lib/rcu: Defining dependency "rcu" 00:00:46.067 Message: lib/mempool: Defining dependency "mempool" 00:00:46.067 Message: lib/mbuf: Defining dependency "mbuf" 00:00:46.067 Fetching value of define "__PCLMUL__" : 1 (cached) 00:00:46.067 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:46.067 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:46.067 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:46.067 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:46.067 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:00:46.067 Compiler for C supports arguments -mpclmul: YES 00:00:46.067 Compiler for C supports arguments -maes: YES 00:00:46.067 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:46.068 Compiler for C supports arguments -mavx512bw: YES 00:00:46.068 Compiler for C supports arguments -mavx512dq: YES 00:00:46.068 Compiler for C supports arguments -mavx512vl: YES 00:00:46.068 Compiler for C supports arguments -mvpclmulqdq: YES 00:00:46.068 Compiler for C supports arguments -mavx2: YES 00:00:46.068 Compiler for C supports arguments -mavx: YES 00:00:46.068 Message: lib/net: Defining dependency "net" 00:00:46.068 Message: lib/meter: Defining dependency "meter" 00:00:46.068 Message: lib/ethdev: Defining dependency "ethdev" 00:00:46.068 Message: lib/pci: Defining dependency "pci" 00:00:46.068 Message: lib/cmdline: Defining dependency "cmdline" 00:00:46.068 Message: lib/metrics: Defining dependency "metrics" 00:00:46.068 Message: lib/hash: Defining dependency "hash" 00:00:46.068 Message: lib/timer: Defining dependency "timer" 00:00:46.068 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:46.068 Fetching value of define "__AVX512VL__" : 1 (cached) 00:00:46.068 Fetching value of define "__AVX512CD__" : 1 (cached) 00:00:46.068 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:46.068 Message: lib/acl: Defining dependency "acl" 00:00:46.068 Message: lib/bbdev: Defining dependency "bbdev" 00:00:46.068 Message: lib/bitratestats: Defining dependency "bitratestats" 00:00:46.068 Run-time dependency libelf found: YES 0.190 00:00:46.068 Message: lib/bpf: Defining dependency "bpf" 00:00:46.068 Message: lib/cfgfile: Defining dependency "cfgfile" 00:00:46.068 Message: lib/compressdev: Defining dependency "compressdev" 00:00:46.068 Message: lib/cryptodev: Defining dependency "cryptodev" 00:00:46.068 Message: lib/distributor: Defining dependency "distributor" 00:00:46.068 Message: lib/dmadev: Defining dependency "dmadev" 00:00:46.068 Message: lib/efd: Defining dependency "efd" 00:00:46.068 Message: lib/eventdev: Defining dependency "eventdev" 00:00:46.068 Message: lib/dispatcher: Defining dependency "dispatcher" 00:00:46.068 Message: lib/gpudev: Defining dependency "gpudev" 00:00:46.068 Message: lib/gro: Defining dependency "gro" 00:00:46.068 Message: lib/gso: Defining dependency "gso" 00:00:46.068 Message: lib/ip_frag: Defining dependency "ip_frag" 00:00:46.068 Message: lib/jobstats: Defining dependency "jobstats" 00:00:46.068 Message: lib/latencystats: Defining dependency "latencystats" 00:00:46.068 Message: lib/lpm: Defining dependency "lpm" 00:00:46.068 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:46.068 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:46.068 Fetching value of define "__AVX512IFMA__" : 1 00:00:46.068 Message: 
lib/member: Defining dependency "member" 00:00:46.068 Message: lib/pcapng: Defining dependency "pcapng" 00:00:46.068 Compiler for C supports arguments -Wno-cast-qual: YES 00:00:46.068 Message: lib/power: Defining dependency "power" 00:00:46.068 Message: lib/rawdev: Defining dependency "rawdev" 00:00:46.068 Message: lib/regexdev: Defining dependency "regexdev" 00:00:46.068 Message: lib/mldev: Defining dependency "mldev" 00:00:46.068 Message: lib/rib: Defining dependency "rib" 00:00:46.068 Message: lib/reorder: Defining dependency "reorder" 00:00:46.068 Message: lib/sched: Defining dependency "sched" 00:00:46.068 Message: lib/security: Defining dependency "security" 00:00:46.068 Message: lib/stack: Defining dependency "stack" 00:00:46.068 Has header "linux/userfaultfd.h" : YES 00:00:46.068 Has header "linux/vduse.h" : YES 00:00:46.068 Message: lib/vhost: Defining dependency "vhost" 00:00:46.068 Message: lib/ipsec: Defining dependency "ipsec" 00:00:46.068 Message: lib/pdcp: Defining dependency "pdcp" 00:00:46.068 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:46.068 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:00:46.068 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:46.068 Message: lib/fib: Defining dependency "fib" 00:00:46.068 Message: lib/port: Defining dependency "port" 00:00:46.068 Message: lib/pdump: Defining dependency "pdump" 00:00:46.068 Message: lib/table: Defining dependency "table" 00:00:46.068 Message: lib/pipeline: Defining dependency "pipeline" 00:00:46.068 Message: lib/graph: Defining dependency "graph" 00:00:46.068 Message: lib/node: Defining dependency "node" 00:00:46.068 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:00:46.068 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:00:46.068 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:00:47.468 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:00:47.468 Compiler for C supports arguments -Wno-sign-compare: YES 00:00:47.468 Compiler for C supports arguments -Wno-unused-value: YES 00:00:47.468 Compiler for C supports arguments -Wno-format: YES 00:00:47.468 Compiler for C supports arguments -Wno-format-security: YES 00:00:47.468 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:00:47.468 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:00:47.468 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:00:47.468 Compiler for C supports arguments -Wno-unused-parameter: YES 00:00:47.468 Fetching value of define "__AVX512F__" : 1 (cached) 00:00:47.468 Fetching value of define "__AVX512BW__" : 1 (cached) 00:00:47.468 Compiler for C supports arguments -mavx512f: YES (cached) 00:00:47.468 Compiler for C supports arguments -mavx512bw: YES (cached) 00:00:47.468 Compiler for C supports arguments -march=skylake-avx512: YES 00:00:47.468 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:00:47.468 Has header "sys/epoll.h" : YES 00:00:47.468 Program doxygen found: YES (/usr/bin/doxygen) 00:00:47.468 Configuring doxy-api-html.conf using configuration 00:00:47.468 Configuring doxy-api-man.conf using configuration 00:00:47.468 Program mandb found: YES (/usr/bin/mandb) 00:00:47.468 Program sphinx-build found: NO 00:00:47.468 Configuring rte_build_config.h using configuration 00:00:47.468 Message: 00:00:47.468 ================= 00:00:47.468 Applications Enabled 00:00:47.468 ================= 00:00:47.468 00:00:47.468 apps: 00:00:47.468 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:00:47.468 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:00:47.468 test-pmd, test-regex, test-sad, test-security-perf, 00:00:47.468 00:00:47.468 Message: 00:00:47.468 ================= 00:00:47.468 Libraries Enabled 00:00:47.468 ================= 00:00:47.468 00:00:47.468 libs: 00:00:47.468 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:00:47.468 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:00:47.468 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:00:47.468 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:00:47.468 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:00:47.468 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:00:47.468 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:00:47.468 00:00:47.468 00:00:47.468 Message: 00:00:47.468 =============== 00:00:47.468 Drivers Enabled 00:00:47.468 =============== 00:00:47.468 00:00:47.468 common: 00:00:47.468 00:00:47.468 bus: 00:00:47.468 pci, vdev, 00:00:47.468 mempool: 00:00:47.468 ring, 00:00:47.468 dma: 00:00:47.468 00:00:47.468 net: 00:00:47.468 i40e, 00:00:47.468 raw: 00:00:47.468 00:00:47.468 crypto: 00:00:47.468 00:00:47.468 compress: 00:00:47.468 00:00:47.468 regex: 00:00:47.468 00:00:47.468 ml: 00:00:47.468 00:00:47.468 vdpa: 00:00:47.468 00:00:47.468 event: 00:00:47.468 00:00:47.468 baseband: 00:00:47.468 00:00:47.468 gpu: 00:00:47.468 00:00:47.468 00:00:47.468 Message: 00:00:47.468 ================= 00:00:47.468 Content Skipped 00:00:47.468 ================= 00:00:47.468 00:00:47.468 apps: 00:00:47.468 00:00:47.468 libs: 00:00:47.468 00:00:47.468 drivers: 00:00:47.468 common/cpt: not in enabled drivers build config 00:00:47.468 common/dpaax: not in enabled drivers build config 00:00:47.468 common/iavf: not in enabled drivers build config 00:00:47.468 common/idpf: not in enabled drivers build config 00:00:47.468 common/mvep: not in enabled drivers build config 00:00:47.468 common/octeontx: not in enabled drivers build config 00:00:47.468 bus/auxiliary: not in enabled drivers build config 00:00:47.468 bus/cdx: not in enabled drivers build config 00:00:47.468 bus/dpaa: not in enabled drivers build config 00:00:47.468 bus/fslmc: not in enabled drivers build config 00:00:47.468 bus/ifpga: not in enabled drivers build config 00:00:47.468 bus/platform: not in enabled drivers build config 00:00:47.468 bus/vmbus: not in enabled drivers build config 00:00:47.468 common/cnxk: not in enabled drivers build config 00:00:47.468 common/mlx5: not in enabled drivers build config 00:00:47.468 common/nfp: not in enabled drivers build config 00:00:47.468 common/qat: not in enabled drivers build config 00:00:47.468 common/sfc_efx: not in enabled drivers build config 00:00:47.468 mempool/bucket: not in enabled drivers build config 00:00:47.468 mempool/cnxk: not in enabled drivers build config 00:00:47.468 mempool/dpaa: not in enabled drivers build config 00:00:47.468 mempool/dpaa2: not in enabled drivers build config 00:00:47.468 mempool/octeontx: not in enabled drivers build config 00:00:47.468 mempool/stack: not in enabled drivers build config 00:00:47.468 dma/cnxk: not in enabled drivers build config 00:00:47.468 dma/dpaa: not in enabled drivers build config 00:00:47.468 dma/dpaa2: not in enabled drivers build config 00:00:47.468 dma/hisilicon: not in enabled drivers build config 00:00:47.468 dma/idxd: not in enabled drivers build 
config 00:00:47.468 dma/ioat: not in enabled drivers build config 00:00:47.468 dma/skeleton: not in enabled drivers build config 00:00:47.468 net/af_packet: not in enabled drivers build config 00:00:47.468 net/af_xdp: not in enabled drivers build config 00:00:47.468 net/ark: not in enabled drivers build config 00:00:47.468 net/atlantic: not in enabled drivers build config 00:00:47.468 net/avp: not in enabled drivers build config 00:00:47.468 net/axgbe: not in enabled drivers build config 00:00:47.468 net/bnx2x: not in enabled drivers build config 00:00:47.468 net/bnxt: not in enabled drivers build config 00:00:47.468 net/bonding: not in enabled drivers build config 00:00:47.468 net/cnxk: not in enabled drivers build config 00:00:47.468 net/cpfl: not in enabled drivers build config 00:00:47.468 net/cxgbe: not in enabled drivers build config 00:00:47.468 net/dpaa: not in enabled drivers build config 00:00:47.468 net/dpaa2: not in enabled drivers build config 00:00:47.468 net/e1000: not in enabled drivers build config 00:00:47.468 net/ena: not in enabled drivers build config 00:00:47.468 net/enetc: not in enabled drivers build config 00:00:47.468 net/enetfec: not in enabled drivers build config 00:00:47.468 net/enic: not in enabled drivers build config 00:00:47.468 net/failsafe: not in enabled drivers build config 00:00:47.468 net/fm10k: not in enabled drivers build config 00:00:47.468 net/gve: not in enabled drivers build config 00:00:47.468 net/hinic: not in enabled drivers build config 00:00:47.468 net/hns3: not in enabled drivers build config 00:00:47.468 net/iavf: not in enabled drivers build config 00:00:47.468 net/ice: not in enabled drivers build config 00:00:47.468 net/idpf: not in enabled drivers build config 00:00:47.468 net/igc: not in enabled drivers build config 00:00:47.468 net/ionic: not in enabled drivers build config 00:00:47.468 net/ipn3ke: not in enabled drivers build config 00:00:47.468 net/ixgbe: not in enabled drivers build config 00:00:47.468 net/mana: not in enabled drivers build config 00:00:47.468 net/memif: not in enabled drivers build config 00:00:47.468 net/mlx4: not in enabled drivers build config 00:00:47.468 net/mlx5: not in enabled drivers build config 00:00:47.468 net/mvneta: not in enabled drivers build config 00:00:47.468 net/mvpp2: not in enabled drivers build config 00:00:47.468 net/netvsc: not in enabled drivers build config 00:00:47.468 net/nfb: not in enabled drivers build config 00:00:47.468 net/nfp: not in enabled drivers build config 00:00:47.468 net/ngbe: not in enabled drivers build config 00:00:47.468 net/null: not in enabled drivers build config 00:00:47.468 net/octeontx: not in enabled drivers build config 00:00:47.468 net/octeon_ep: not in enabled drivers build config 00:00:47.468 net/pcap: not in enabled drivers build config 00:00:47.468 net/pfe: not in enabled drivers build config 00:00:47.468 net/qede: not in enabled drivers build config 00:00:47.468 net/ring: not in enabled drivers build config 00:00:47.468 net/sfc: not in enabled drivers build config 00:00:47.468 net/softnic: not in enabled drivers build config 00:00:47.468 net/tap: not in enabled drivers build config 00:00:47.468 net/thunderx: not in enabled drivers build config 00:00:47.468 net/txgbe: not in enabled drivers build config 00:00:47.468 net/vdev_netvsc: not in enabled drivers build config 00:00:47.468 net/vhost: not in enabled drivers build config 00:00:47.468 net/virtio: not in enabled drivers build config 00:00:47.468 net/vmxnet3: not in enabled drivers build config 
00:00:47.468 raw/cnxk_bphy: not in enabled drivers build config 00:00:47.468 raw/cnxk_gpio: not in enabled drivers build config 00:00:47.468 raw/dpaa2_cmdif: not in enabled drivers build config 00:00:47.468 raw/ifpga: not in enabled drivers build config 00:00:47.468 raw/ntb: not in enabled drivers build config 00:00:47.468 raw/skeleton: not in enabled drivers build config 00:00:47.468 crypto/armv8: not in enabled drivers build config 00:00:47.468 crypto/bcmfs: not in enabled drivers build config 00:00:47.468 crypto/caam_jr: not in enabled drivers build config 00:00:47.468 crypto/ccp: not in enabled drivers build config 00:00:47.468 crypto/cnxk: not in enabled drivers build config 00:00:47.468 crypto/dpaa_sec: not in enabled drivers build config 00:00:47.468 crypto/dpaa2_sec: not in enabled drivers build config 00:00:47.468 crypto/ipsec_mb: not in enabled drivers build config 00:00:47.468 crypto/mlx5: not in enabled drivers build config 00:00:47.468 crypto/mvsam: not in enabled drivers build config 00:00:47.468 crypto/nitrox: not in enabled drivers build config 00:00:47.468 crypto/null: not in enabled drivers build config 00:00:47.468 crypto/octeontx: not in enabled drivers build config 00:00:47.468 crypto/openssl: not in enabled drivers build config 00:00:47.468 crypto/scheduler: not in enabled drivers build config 00:00:47.468 crypto/uadk: not in enabled drivers build config 00:00:47.468 crypto/virtio: not in enabled drivers build config 00:00:47.468 compress/isal: not in enabled drivers build config 00:00:47.468 compress/mlx5: not in enabled drivers build config 00:00:47.468 compress/octeontx: not in enabled drivers build config 00:00:47.468 compress/zlib: not in enabled drivers build config 00:00:47.468 regex/mlx5: not in enabled drivers build config 00:00:47.468 regex/cn9k: not in enabled drivers build config 00:00:47.468 ml/cnxk: not in enabled drivers build config 00:00:47.468 vdpa/ifc: not in enabled drivers build config 00:00:47.468 vdpa/mlx5: not in enabled drivers build config 00:00:47.468 vdpa/nfp: not in enabled drivers build config 00:00:47.469 vdpa/sfc: not in enabled drivers build config 00:00:47.469 event/cnxk: not in enabled drivers build config 00:00:47.469 event/dlb2: not in enabled drivers build config 00:00:47.469 event/dpaa: not in enabled drivers build config 00:00:47.469 event/dpaa2: not in enabled drivers build config 00:00:47.469 event/dsw: not in enabled drivers build config 00:00:47.469 event/opdl: not in enabled drivers build config 00:00:47.469 event/skeleton: not in enabled drivers build config 00:00:47.469 event/sw: not in enabled drivers build config 00:00:47.469 event/octeontx: not in enabled drivers build config 00:00:47.469 baseband/acc: not in enabled drivers build config 00:00:47.469 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:00:47.469 baseband/fpga_lte_fec: not in enabled drivers build config 00:00:47.469 baseband/la12xx: not in enabled drivers build config 00:00:47.469 baseband/null: not in enabled drivers build config 00:00:47.469 baseband/turbo_sw: not in enabled drivers build config 00:00:47.469 gpu/cuda: not in enabled drivers build config 00:00:47.469 00:00:47.469 00:00:47.469 Build targets in project: 215 00:00:47.469 00:00:47.469 DPDK 23.11.0 00:00:47.469 00:00:47.469 User defined options 00:00:47.469 libdir : lib 00:00:47.469 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:47.469 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:00:47.469 c_link_args : 00:00:47.469 enable_docs : false 
00:00:47.469 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:00:47.469 enable_kmods : false 00:00:47.469 machine : native 00:00:47.469 tests : false 00:00:47.469 00:00:47.469 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:00:47.469 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:00:47.469 09:54:33 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:00:47.469 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:00:47.758 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:00:47.758 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:00:47.758 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:00:47.758 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:00:47.758 [5/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:00:47.758 [6/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:00:47.758 [7/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:00:47.758 [8/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:00:47.758 [9/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:00:47.758 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:00:47.758 [11/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:00:47.758 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:00:47.758 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:00:47.758 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:00:47.758 [15/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:00:47.758 [16/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:00:47.758 [17/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:00:47.758 [18/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:00:47.758 [19/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:00:47.758 [20/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:00:47.758 [21/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:00:47.758 [22/705] Linking static target lib/librte_kvargs.a 00:00:48.019 [23/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:00:48.019 [24/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:00:48.019 [25/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:00:48.019 [26/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:00:48.019 [27/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:00:48.019 [28/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:00:48.019 [29/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:00:48.019 [30/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:00:48.019 [31/705] Linking static target lib/librte_log.a 00:00:48.019 [32/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:00:48.019 [33/705] Linking static target lib/librte_pci.a 00:00:48.019 [34/705] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:00:48.019 [35/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:00:48.019 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:00:48.282 [37/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:00:48.283 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:00:48.283 [39/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:00:48.283 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:00:48.283 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:00:48.283 [42/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:00:48.283 [43/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:00:48.283 [44/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:48.283 [45/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:00:48.283 [46/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:00:48.283 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:00:48.283 [48/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:00:48.283 [49/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:00:48.283 [50/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:00:48.283 [51/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:00:48.283 [52/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:00:48.283 [53/705] Linking static target lib/librte_cfgfile.a 00:00:48.283 [54/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:00:48.283 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:00:48.283 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:00:48.548 [57/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:00:48.548 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:00:48.548 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:00:48.548 [60/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:00:48.548 [61/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:00:48.548 [62/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:00:48.548 [63/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:00:48.548 [64/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:00:48.548 [65/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:00:48.548 [66/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:00:48.548 [67/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:00:48.548 [68/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:00:48.548 [69/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:00:48.548 [70/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:00:48.548 [71/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:00:48.548 [72/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:00:48.548 [73/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:00:48.548 [74/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:00:48.548 [75/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:00:48.548 [76/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:00:48.548 [77/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:00:48.548 [78/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:00:48.548 [79/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:00:48.548 [80/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:00:48.548 [81/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:00:48.548 [82/705] Linking static target lib/librte_cmdline.a 00:00:48.548 [83/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:00:48.548 [84/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:00:48.548 [85/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:00:48.548 [86/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:00:48.548 [87/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:00:48.548 [88/705] Linking static target lib/librte_meter.a 00:00:48.548 [89/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:00:48.548 [90/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:00:48.548 [91/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:00:48.548 [92/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:00:48.548 [93/705] Linking static target lib/librte_ring.a 00:00:48.548 [94/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:00:48.548 [95/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:00:48.548 [96/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:00:48.548 [97/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:00:48.548 [98/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:00:48.548 [99/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:00:48.548 [100/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:00:48.548 [101/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:00:48.548 [102/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:00:48.548 [103/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:00:48.548 [104/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:00:48.548 [105/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:00:48.548 [106/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:00:48.548 [107/705] Linking static target lib/librte_bitratestats.a 00:00:48.548 [108/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:00:48.548 [109/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:00:48.548 [110/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:00:48.548 [111/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:00:48.548 [112/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:00:48.548 [113/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:00:48.548 [114/705] Linking static 
target lib/librte_metrics.a 00:00:48.548 [115/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:00:48.548 [116/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:00:48.548 [117/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:00:48.548 [118/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:00:48.548 [119/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:00:48.807 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:00:48.807 [121/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:00:48.807 [122/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:00:48.807 [123/705] Linking static target lib/librte_compressdev.a 00:00:48.807 [124/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:00:48.807 [125/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:00:48.807 [126/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:00:48.807 [127/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:00:48.807 [128/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:00:48.807 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:00:48.807 [130/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:00:48.807 [131/705] Linking static target lib/librte_net.a 00:00:48.807 [132/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:00:48.807 [133/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:00:48.807 [134/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:00:48.807 [135/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:00:48.807 [136/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:00:48.807 [137/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:00:48.807 [138/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:00:48.807 [139/705] Linking target lib/librte_log.so.24.0 00:00:48.807 [140/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:00:48.807 [141/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:00:48.807 [142/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:00:48.807 [143/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:00:48.807 [144/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:00:49.065 [145/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:00:49.065 [146/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:00:49.065 [147/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.065 [148/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.065 [149/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:00:49.065 [150/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:00:49.065 [151/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:00:49.065 [152/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:00:49.065 [153/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:00:49.065 [154/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.065 [155/705] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:00:49.065 [156/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.065 [157/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:00:49.065 [158/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:00:49.065 [159/705] Linking static target lib/librte_dispatcher.a 00:00:49.065 [160/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:00:49.065 [161/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:00:49.065 [162/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:00:49.065 [163/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:00:49.065 [164/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:00:49.065 [165/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:00:49.065 [166/705] Linking static target lib/librte_timer.a 00:00:49.065 [167/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:00:49.065 [168/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:00:49.065 [169/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:00:49.065 [170/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:00:49.065 [171/705] Linking static target lib/librte_jobstats.a 00:00:49.065 [172/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:00:49.065 [173/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:00:49.065 [174/705] Linking static target lib/librte_mempool.a 00:00:49.065 [175/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:00:49.065 [176/705] Linking static target lib/librte_gpudev.a 00:00:49.065 [177/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:00:49.065 [178/705] Linking static target lib/librte_dmadev.a 00:00:49.065 [179/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:00:49.065 [180/705] Linking target lib/librte_kvargs.so.24.0 00:00:49.065 [181/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:00:49.065 [182/705] Linking static target lib/librte_bbdev.a 00:00:49.065 [183/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:00:49.065 [184/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:00:49.066 [185/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:00:49.066 [186/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:00:49.066 [187/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:00:49.066 [188/705] Linking static target lib/librte_latencystats.a 00:00:49.066 [189/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:00:49.066 [190/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:00:49.066 [191/705] Linking static target lib/librte_gro.a 00:00:49.066 [192/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:00:49.066 [193/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:00:49.066 [194/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:00:49.066 [195/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:00:49.066 [196/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:00:49.066 [197/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:00:49.327 [198/705] Compiling C object 
lib/librte_sched.a.p/sched_rte_red.c.o 00:00:49.327 [199/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:00:49.327 [200/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:00:49.327 [201/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:00:49.327 [202/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.327 [203/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:00:49.327 [204/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:00:49.327 [205/705] Linking static target lib/librte_stack.a 00:00:49.327 [206/705] Linking static target lib/librte_distributor.a 00:00:49.327 [207/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:00:49.327 [208/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:00:49.327 [209/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:00:49.327 [210/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.327 [211/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:00:49.327 [212/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:00:49.327 [213/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:49.327 [214/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:00:49.327 [215/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:00:49.327 [216/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:00:49.327 [217/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:00:49.327 [218/705] Linking static target lib/librte_gso.a 00:00:49.327 [219/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:00:49.327 [220/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:00:49.327 [221/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:00:49.327 [222/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:00:49.327 [223/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:00:49.327 [224/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:00:49.327 [225/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:00:49.327 [226/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:00:49.327 [227/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:00:49.327 [228/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:00:49.327 [229/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:00:49.327 [230/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:00:49.588 [231/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:00:49.588 [232/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:00:49.588 [233/705] Linking static target lib/librte_telemetry.a 00:00:49.588 [234/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:00:49.588 [235/705] Linking static target lib/librte_regexdev.a 00:00:49.588 [236/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:00:49.588 [237/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:00:49.588 [238/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 
00:00:49.588 [239/705] Linking static target lib/librte_mldev.a 00:00:49.588 [240/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:00:49.588 [241/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:00:49.588 [242/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:00:49.588 [243/705] Linking static target lib/librte_rawdev.a 00:00:49.588 [244/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:00:49.588 [245/705] Linking static target lib/librte_ip_frag.a 00:00:49.588 [246/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:00:49.588 [247/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:00:49.588 [248/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:00:49.588 [249/705] Linking static target lib/librte_rcu.a 00:00:49.588 [250/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:00:49.588 [251/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:00:49.588 [252/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:00:49.588 [253/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [254/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:00:49.588 [255/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:00:49.588 [256/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:00:49.588 [257/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [258/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:00:49.588 [259/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [260/705] Linking static target lib/librte_eal.a 00:00:49.588 [261/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:00:49.588 [262/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:00:49.588 [263/705] Linking static target lib/librte_power.a 00:00:49.588 [264/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:00:49.588 [265/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:00:49.588 [266/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [267/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [268/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [269/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [270/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.588 [271/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:00:49.588 [272/705] Linking static target lib/librte_reorder.a 00:00:49.588 [273/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:00:49.588 [274/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:00:49.588 [275/705] Linking static target lib/librte_security.a 00:00:49.588 [276/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:00:49.588 [277/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:00:49.588 [278/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.850 [279/705] Linking 
static target lib/librte_bpf.a 00:00:49.850 [280/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:00:49.850 [281/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:00:49.850 [282/705] Linking static target lib/librte_pcapng.a 00:00:49.850 [283/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:00:49.850 [284/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:00:49.850 [285/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:00:49.850 [286/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:00:49.850 [287/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:00:49.850 [288/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:00:49.850 [289/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:00:49.850 [290/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:00:49.850 [291/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:00:49.850 [292/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:00:49.850 [293/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:00:49.850 [294/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:00:49.850 [295/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:00:49.850 [296/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:00:49.850 [297/705] Linking static target lib/librte_rib.a 00:00:49.850 [298/705] Linking static target lib/librte_mbuf.a 00:00:49.850 [299/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:00:49.850 [300/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:00:49.850 [301/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:00:49.850 [302/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:00:49.850 [303/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:00:49.850 [304/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:00:49.850 [305/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:00:49.850 [306/705] Linking static target lib/librte_efd.a 00:00:49.850 [307/705] Linking static target lib/librte_lpm.a 00:00:49.850 [308/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:00:49.850 [309/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:00:49.850 [310/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:00:49.850 [311/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:00:49.850 [312/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:00:49.850 [313/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:00:49.850 [314/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:00:49.850 [315/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:00:50.115 [316/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:00:50.115 [317/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:00:50.115 [318/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:00:50.115 [319/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:00:50.115 [320/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:00:50.115 [321/705] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:00:50.115 [322/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:00:50.115 [323/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:00:50.115 [324/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [325/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:00:50.115 [326/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:00:50.115 [327/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:00:50.115 [328/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:00:50.115 [329/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:00:50.115 [330/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [331/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:00:50.115 [332/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:00:50.115 [333/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:00:50.115 [334/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:00:50.115 [335/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:00:50.115 [336/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:00:50.115 [337/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:00:50.115 [338/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:00:50.115 [339/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [340/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [341/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [342/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [343/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:00:50.115 [344/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [345/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.115 [346/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:00:50.115 [347/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:00:50.115 [348/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:00:50.115 [349/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:00:50.116 [350/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:00:50.374 [351/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:00:50.374 [352/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:00:50.374 [353/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:00:50.374 [354/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:00:50.374 [355/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:00:50.374 [356/705] Linking target lib/librte_telemetry.so.24.0 00:00:50.374 [357/705] Linking static target lib/librte_graph.a 00:00:50.374 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.374 [359/705] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:00:50.374 [360/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:00:50.374 [361/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.374 [362/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:00:50.374 [363/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:00:50.374 [364/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:00:50.374 [365/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:00:50.374 [366/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:00:50.374 [367/705] Linking static target lib/librte_fib.a 00:00:50.374 [368/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:00:50.374 [369/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:00:50.374 [370/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:00:50.374 [371/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:00:50.374 [372/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:00:50.374 [373/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.374 [374/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:00:50.374 [375/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:00:50.374 [376/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:00:50.374 [377/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:00:50.374 [378/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:00:50.374 [379/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:00:50.374 [380/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:00:50.374 [381/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:00:50.674 [382/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:00:50.674 [383/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:00:50.674 [384/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:00:50.674 [385/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:00:50.674 [386/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.674 [387/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:00:50.674 [388/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:00:50.674 [389/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:00:50.674 [390/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:00:50.674 [391/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:00:50.674 [392/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:00:50.674 [393/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:00:50.674 [394/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:00:50.674 [395/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.674 [396/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.674 [397/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:00:50.674 [398/705] Linking static target 
lib/librte_pdump.a 00:00:50.674 [399/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:00:50.674 [400/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:00:50.674 [401/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:00:50.674 [402/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:00:50.674 [403/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:00:50.674 [404/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:00:50.674 [405/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:00:50.674 [406/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:00:50.674 [407/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:00:50.674 [408/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.674 [409/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:00:50.674 [410/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:00:50.674 [411/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:00:50.674 [412/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.674 [413/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:00:50.674 [414/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:00:50.955 [415/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:00:50.955 [416/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:50.955 [417/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:00:50.955 [418/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:00:50.955 [419/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:00:50.955 [420/705] Linking static target drivers/librte_bus_vdev.a 00:00:50.955 [421/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:00:50.955 [422/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:00:50.955 [423/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:00:50.956 [424/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:00:50.956 [425/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:00:50.956 [426/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.956 [427/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.956 [428/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:00:50.956 [429/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:00:50.956 [430/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:00:50.956 [431/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:00:50.956 [432/705] Linking static target lib/librte_sched.a 00:00:50.956 [433/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:00:50.956 [434/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:50.956 [435/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:00:50.956 [436/705] Linking static target drivers/librte_bus_pci.a 00:00:50.956 
[437/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:00:50.956 [438/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:00:50.956 [439/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.956 [440/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:00:50.956 [441/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:00:50.956 [442/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:00:50.956 [443/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:00:50.956 [444/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:00:50.956 [445/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:00:50.956 [446/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:00:50.956 [447/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:00:50.956 [448/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:00:50.956 [449/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:00:50.956 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:00:50.956 [451/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:00:50.956 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.956 [453/705] Linking static target lib/librte_table.a 00:00:50.956 [454/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:00:50.956 [455/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:00:50.956 [456/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:00:50.956 [457/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:00:50.956 [458/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:00:50.956 [459/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:00:50.956 [460/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:00:50.956 [461/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:00:50.956 [462/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:00:50.956 [463/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:00:50.956 [464/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:00:50.956 [465/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:00:50.956 [466/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:50.956 [467/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:00:50.956 [468/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:00:50.956 [469/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:00:51.217 [470/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:00:51.217 [471/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:00:51.217 [472/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:00:51.217 [473/705] Linking static target lib/librte_cryptodev.a 00:00:51.217 [474/705] Compiling C object 
lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:00:51.217 [475/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:00:51.217 [476/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:00:51.217 [477/705] Linking static target lib/librte_ipsec.a 00:00:51.217 [478/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:00:51.217 [479/705] Linking static target lib/librte_node.a 00:00:51.217 [480/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:00:51.217 [481/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:00:51.217 [482/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:00:51.217 [483/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:00:51.217 [484/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:00:51.217 [485/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:00:51.217 [486/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.218 [487/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:00:51.218 [488/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:00:51.218 [489/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:00:51.218 [490/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:00:51.218 [491/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:00:51.218 [492/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:00:51.218 [493/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:00:51.218 [494/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:00:51.218 [495/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:00:51.218 [496/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:00:51.218 [497/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:00:51.218 [498/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:00:51.218 [499/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:00:51.218 [500/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:00:51.218 [501/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:00:51.218 [502/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:00:51.218 [503/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:00:51.218 [504/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:00:51.218 [505/705] Linking static target drivers/librte_mempool_ring.a 00:00:51.218 [506/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:00:51.218 [507/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:00:51.218 [508/705] Linking static target lib/librte_port.a 00:00:51.218 [509/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:00:51.218 [510/705] Linking static target lib/librte_member.a 00:00:51.218 [511/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:00:51.218 [512/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:00:51.218 [513/705] Compiling C 
object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:00:51.218 [514/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:00:51.218 [515/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:00:51.218 [516/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.218 [517/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:00:51.218 [518/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:00:51.218 [519/705] Linking static target lib/librte_pdcp.a 00:00:51.218 [520/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.480 [521/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:00:51.480 [522/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:00:51.480 [523/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:00:51.480 [524/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:00:51.480 [525/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:00:51.480 [526/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:00:51.480 [527/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:00:51.480 [528/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:00:51.480 [529/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:00:51.480 [530/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:00:51.480 [531/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:00:51.480 [532/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:00:51.480 [533/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:00:51.480 [534/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:00:51.480 [535/705] Linking static target lib/acl/libavx2_tmp.a 00:00:51.480 [536/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.480 [537/705] Linking static target lib/librte_hash.a 00:00:51.480 [538/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:00:51.480 [539/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:00:51.480 [540/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:00:51.480 [541/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.480 [542/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:00:51.480 [543/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.480 [544/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:00:51.480 [545/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:00:51.480 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:00:51.742 [547/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:00:51.742 [548/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.742 [549/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:00:51.742 [550/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:00:51.742 [551/705] 
Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:00:51.742 [552/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:00:51.742 [553/705] Linking static target lib/librte_acl.a 00:00:51.742 [554/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:00:51.742 [555/705] Linking static target lib/librte_eventdev.a 00:00:51.742 [556/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:00:51.742 [557/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:00:51.742 [558/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:00:51.742 [559/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:00:51.742 [560/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:00:52.004 [561/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:00:52.004 [562/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:00:52.004 [563/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:00:52.004 [564/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:00:52.004 [565/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.004 [566/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.266 [567/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:00:52.266 [568/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.266 [569/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:00:52.266 [570/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:00:52.266 [571/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:00:52.266 [572/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:00:52.527 [573/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:00:52.527 [574/705] Linking static target lib/librte_ethdev.a 00:00:52.789 [575/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:00:52.789 [576/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:00:52.789 [577/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:00:53.051 [578/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:53.313 [579/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:00:53.575 [580/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:00:53.575 [581/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:00:53.836 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:00:53.836 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:00:53.836 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:53.836 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:00:53.836 [586/705] Linking static target drivers/librte_net_i40e.a 00:00:54.780 [587/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:00:55.041 [588/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:00:55.041 [589/705] Compiling C 
object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:00:55.303 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.517 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:00:59.517 [592/705] Linking static target lib/librte_pipeline.a 00:01:00.463 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:00.463 [594/705] Linking static target lib/librte_vhost.a 00:01:00.725 [595/705] Linking target app/dpdk-pdump 00:01:00.725 [596/705] Linking target app/dpdk-test-compress-perf 00:01:00.725 [597/705] Linking target app/dpdk-test-cmdline 00:01:00.725 [598/705] Linking target app/dpdk-test-fib 00:01:00.725 [599/705] Linking target app/dpdk-test-sad 00:01:00.725 [600/705] Linking target app/dpdk-test-dma-perf 00:01:00.725 [601/705] Linking target app/dpdk-test-gpudev 00:01:00.725 [602/705] Linking target app/dpdk-test-security-perf 00:01:00.725 [603/705] Linking target app/dpdk-testpmd 00:01:00.725 [604/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.986 [605/705] Linking target app/dpdk-dumpcap 00:01:00.986 [606/705] Linking target app/dpdk-test-acl 00:01:00.986 [607/705] Linking target app/dpdk-graph 00:01:00.986 [608/705] Linking target app/dpdk-test-flow-perf 00:01:00.986 [609/705] Linking target app/dpdk-proc-info 00:01:00.986 [610/705] Linking target app/dpdk-test-regex 00:01:00.986 [611/705] Linking target app/dpdk-test-bbdev 00:01:00.986 [612/705] Linking target app/dpdk-test-mldev 00:01:00.986 [613/705] Linking target app/dpdk-test-pipeline 00:01:00.986 [614/705] Linking target app/dpdk-test-crypto-perf 00:01:00.986 [615/705] Linking target app/dpdk-test-eventdev 00:01:00.986 [616/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.986 [617/705] Linking target lib/librte_eal.so.24.0 00:01:00.986 [618/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:00.986 [619/705] Linking target drivers/librte_bus_vdev.so.24.0 00:01:00.986 [620/705] Linking target lib/librte_ring.so.24.0 00:01:00.986 [621/705] Linking target lib/librte_timer.so.24.0 00:01:00.986 [622/705] Linking target lib/librte_meter.so.24.0 00:01:00.986 [623/705] Linking target lib/librte_jobstats.so.24.0 00:01:00.986 [624/705] Linking target lib/librte_pci.so.24.0 00:01:00.986 [625/705] Linking target lib/librte_cfgfile.so.24.0 00:01:00.986 [626/705] Linking target lib/librte_dmadev.so.24.0 00:01:00.986 [627/705] Linking target lib/librte_stack.so.24.0 00:01:00.986 [628/705] Linking target lib/librte_rawdev.so.24.0 00:01:00.986 [629/705] Linking target lib/librte_acl.so.24.0 00:01:01.247 [630/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:01.247 [631/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:01.247 [632/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:01.247 [633/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:01.247 [634/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:01.247 [635/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:01.247 [636/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:01.247 [637/705] Linking target lib/librte_rcu.so.24.0 00:01:01.247 [638/705] Linking 
target lib/librte_mempool.so.24.0 00:01:01.247 [639/705] Linking target drivers/librte_bus_pci.so.24.0 00:01:01.509 [640/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:01.509 [641/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:01.509 [642/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:01.509 [643/705] Linking target lib/librte_rib.so.24.0 00:01:01.509 [644/705] Linking target drivers/librte_mempool_ring.so.24.0 00:01:01.509 [645/705] Linking target lib/librte_mbuf.so.24.0 00:01:01.509 [646/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:01.509 [647/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:01.770 [648/705] Linking target lib/librte_fib.so.24.0 00:01:01.770 [649/705] Linking target lib/librte_bbdev.so.24.0 00:01:01.770 [650/705] Linking target lib/librte_distributor.so.24.0 00:01:01.770 [651/705] Linking target lib/librte_net.so.24.0 00:01:01.770 [652/705] Linking target lib/librte_compressdev.so.24.0 00:01:01.770 [653/705] Linking target lib/librte_gpudev.so.24.0 00:01:01.770 [654/705] Linking target lib/librte_cryptodev.so.24.0 00:01:01.770 [655/705] Linking target lib/librte_regexdev.so.24.0 00:01:01.770 [656/705] Linking target lib/librte_sched.so.24.0 00:01:01.770 [657/705] Linking target lib/librte_reorder.so.24.0 00:01:01.771 [658/705] Linking target lib/librte_mldev.so.24.0 00:01:01.771 [659/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:01.771 [660/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:01.771 [661/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:01.771 [662/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:02.032 [663/705] Linking target lib/librte_hash.so.24.0 00:01:02.032 [664/705] Linking target lib/librte_cmdline.so.24.0 00:01:02.032 [665/705] Linking target lib/librte_security.so.24.0 00:01:02.032 [666/705] Linking target lib/librte_ethdev.so.24.0 00:01:02.032 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:02.032 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:02.032 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:02.032 [670/705] Linking target lib/librte_member.so.24.0 00:01:02.032 [671/705] Linking target lib/librte_efd.so.24.0 00:01:02.032 [672/705] Linking target lib/librte_lpm.so.24.0 00:01:02.032 [673/705] Linking target lib/librte_ip_frag.so.24.0 00:01:02.032 [674/705] Linking target lib/librte_ipsec.so.24.0 00:01:02.032 [675/705] Linking target lib/librte_metrics.so.24.0 00:01:02.032 [676/705] Linking target lib/librte_gso.so.24.0 00:01:02.032 [677/705] Linking target lib/librte_pdcp.so.24.0 00:01:02.032 [678/705] Linking target lib/librte_gro.so.24.0 00:01:02.032 [679/705] Linking target lib/librte_pcapng.so.24.0 00:01:02.032 [680/705] Linking target lib/librte_bpf.so.24.0 00:01:02.032 [681/705] Linking target lib/librte_power.so.24.0 00:01:02.032 [682/705] Linking target lib/librte_eventdev.so.24.0 00:01:02.294 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:01:02.295 [684/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:02.295 [685/705] Generating symbol file 
lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:02.295 [686/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:02.295 [687/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:02.295 [688/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:02.295 [689/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:02.295 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:02.295 [691/705] Linking target lib/librte_graph.so.24.0 00:01:02.295 [692/705] Linking target lib/librte_latencystats.so.24.0 00:01:02.295 [693/705] Linking target lib/librte_bitratestats.so.24.0 00:01:02.295 [694/705] Linking target lib/librte_pdump.so.24.0 00:01:02.295 [695/705] Linking target lib/librte_dispatcher.so.24.0 00:01:02.295 [696/705] Linking target lib/librte_port.so.24.0 00:01:02.295 [697/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.557 [698/705] Linking target lib/librte_vhost.so.24.0 00:01:02.557 [699/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:02.557 [700/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:02.557 [701/705] Linking target lib/librte_node.so.24.0 00:01:02.557 [702/705] Linking target lib/librte_table.so.24.0 00:01:02.819 [703/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:04.744 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:04.744 [705/705] Linking target lib/librte_pipeline.so.24.0 00:01:04.744 09:54:50 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:01:04.744 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:04.744 [0/1] Installing files. 
00:01:04.744 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.744 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.745 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:04.746 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:04.747 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.747 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.748 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.749 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:04.750 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:04.750 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:04.751 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:04.751 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:04.751 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:04.751 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:04.751 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:04.751 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:05.017 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:05.017 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:05.017 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.017 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:05.017 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.017 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.017 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.017 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.017 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
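[Editorial note] The entries up to this point stage DPDK's public headers (rte_config.h plus the EAL, ring, mempool, mbuf, net, ethdev, cmdline and metrics headers) into dpdk/build/include, alongside the librte_* libraries already copied into dpdk/build/lib. As a hedged illustration only (this job does not run these commands), a standalone consumer could be compiled against that staging prefix via the libdpdk pkg-config file that a DPDK meson install normally places under lib/pkgconfig; the pkg-config path and the tiny test program below are assumptions for illustration, not part of this log.

    # Illustrative sketch only -- not executed by this build job.
    # Assumes the install above also placed libdpdk.pc under build/lib/pkgconfig.
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig:$PKG_CONFIG_PATH"

    # Minimal consumer of the headers staged into build/include above.
    cat > hello_eal.c <<'EOF'
    #include <stdio.h>
    #include <rte_eal.h>    /* installed to build/include in the log above */
    #include <rte_lcore.h>  /* installed to build/include in the log above */

    int main(int argc, char **argv)
    {
            /* Bring up the EAL; --no-huge keeps the sketch hugepage-free. */
            if (rte_eal_init(argc, argv) < 0)
                    return 1;
            printf("EAL up on lcore %u\n", rte_lcore_id());
            rte_eal_cleanup();
            return 0;
    }
    EOF

    gcc hello_eal.c -o hello_eal $(pkg-config --cflags --libs libdpdk)
    LD_LIBRARY_PATH="$DPDK_BUILD/lib" ./hello_eal --no-huge -l 0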
00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.020 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.021 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:05.022 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:05.022 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:05.022 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:01:05.022 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:05.022 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:05.024 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:05.024 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:05.024 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:05.024 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:05.024 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:05.024 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:05.024 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:05.024 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:05.024 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:05.024 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:05.024 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:05.024 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:05.024 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:05.024 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:05.024 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:05.024 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:05.024 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:05.024 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:05.024 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:05.024 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:05.024 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:05.024 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:05.024 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:05.024 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:05.024 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:05.024 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:05.024 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:05.024 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:05.024 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:05.024 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:05.024 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:05.024 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:05.024 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:05.024 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:05.024 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:05.024 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:05.024 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:05.024 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:05.024 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:05.024 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:05.024 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:05.024 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:05.024 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:05.024 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:05.024 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:05.024 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:05.024 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:05.024 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:05.024 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:05.024 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:05.024 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:05.024 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:05.024 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:05.024 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:05.024 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:05.024 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:05.024 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:05.024 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:05.024 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:05.024 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:05.024 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:05.024 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:05.024 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:05.024 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:05.024 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:05.024 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:05.024 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:05.024 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:05.024 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:05.024 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:05.024 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:05.024 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:05.024 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:05.024 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:05.024 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:05.024 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:05.024 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:05.024 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:05.024 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:05.024 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:05.024 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:05.024 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:05.024 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:05.025 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:05.025 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:05.025 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:05.025 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:05.025 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:05.025 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:05.025 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:05.025 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:05.025 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:05.025 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:05.025 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:05.025 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:05.025 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:05.025 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:05.025 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:05.025 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:05.025 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:05.025 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:05.025 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:05.025 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:05.025 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:05.025 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:05.025 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:05.025 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:05.025 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:05.025 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:05.025 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:05.025 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:05.025 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:05.025 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:05.025 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:05.025 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:05.025 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:05.025 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:05.025 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:05.025 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:05.025 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:05.025 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:05.025 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:05.025 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:05.025 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:05.025 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:01:05.025 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:05.025 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:05.025 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:05.025 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:05.025 09:54:50 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:01:05.025 09:54:50 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:05.025 09:54:50 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:01:05.025 09:54:50 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.025 00:01:05.025 real 0m24.301s 00:01:05.025 user 7m12.520s 00:01:05.025 sys 3m15.004s 00:01:05.025 09:54:50 build_native_dpdk -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:05.025 09:54:50 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:01:05.025 ************************************ 00:01:05.025 END TEST build_native_dpdk 00:01:05.025 ************************************ 00:01:05.288 09:54:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:05.288 09:54:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:05.288 09:54:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:05.288 09:54:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.288 09:54:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.288 09:54:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.288 09:54:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:05.288 09:54:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:05.288 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:05.550 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:05.550 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:05.551 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:05.813 Using 'verbs' RDMA provider 00:01:21.762 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:34.008 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:34.008 Creating mk/config.mk...done. 00:01:34.008 Creating mk/cc.flags.mk...done. 00:01:34.008 Type 'make' to build. 00:01:34.008 09:55:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:34.008 09:55:19 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:34.008 09:55:19 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:34.008 09:55:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:34.008 ************************************ 00:01:34.008 START TEST make 00:01:34.008 ************************************ 00:01:34.008 09:55:19 make -- common/autotest_common.sh@1122 -- $ make -j144 00:01:34.008 make[1]: Nothing to be done for 'all'. 
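For reference, the configure step above reports "Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs", i.e. the freshly installed DPDK is consumed through the libdpdk pkg-config files that the install lines earlier placed under build/lib/pkgconfig. A minimal sketch of that lookup from a shell, assuming the same workspace layout as in this log (running pkg-config by hand is an illustration only; the exact flag values are not shown in the log):

  # Point pkg-config at the directory the DPDK install step populated
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  # Query the compile and link flags that a consumer such as SPDK's configure can pick up
  pkg-config --cflags libdpdk
  pkg-config --libs libdpdk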
00:01:35.404 The Meson build system 00:01:35.404 Version: 1.3.1 00:01:35.404 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:35.404 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:35.404 Build type: native build 00:01:35.404 Project name: libvfio-user 00:01:35.404 Project version: 0.0.1 00:01:35.404 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:35.404 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:35.404 Host machine cpu family: x86_64 00:01:35.404 Host machine cpu: x86_64 00:01:35.404 Run-time dependency threads found: YES 00:01:35.404 Library dl found: YES 00:01:35.404 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:35.404 Run-time dependency json-c found: YES 0.17 00:01:35.404 Run-time dependency cmocka found: YES 1.1.7 00:01:35.404 Program pytest-3 found: NO 00:01:35.404 Program flake8 found: NO 00:01:35.404 Program misspell-fixer found: NO 00:01:35.404 Program restructuredtext-lint found: NO 00:01:35.404 Program valgrind found: YES (/usr/bin/valgrind) 00:01:35.404 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.404 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.404 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.404 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:35.404 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:35.404 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:35.404 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:35.404 Build targets in project: 8 00:01:35.404 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:35.404 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:35.404 00:01:35.404 libvfio-user 0.0.1 00:01:35.404 00:01:35.404 User defined options 00:01:35.404 buildtype : debug 00:01:35.404 default_library: shared 00:01:35.404 libdir : /usr/local/lib 00:01:35.404 00:01:35.404 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.404 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:35.664 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:35.664 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:35.664 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:35.664 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:35.664 [5/37] Compiling C object samples/null.p/null.c.o 00:01:35.665 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:35.665 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:35.665 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:35.665 [9/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:35.665 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:35.665 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:35.665 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:35.665 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:35.665 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:35.665 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:35.665 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:35.665 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:35.665 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:35.665 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:35.665 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:35.665 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:35.665 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:35.665 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:35.665 [24/37] Compiling C object samples/client.p/client.c.o 00:01:35.665 [25/37] Compiling C object samples/server.p/server.c.o 00:01:35.665 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:35.665 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:35.665 [28/37] Linking target samples/client 00:01:35.665 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:35.665 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:35.665 [31/37] Linking target test/unit_tests 00:01:35.926 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:35.926 [33/37] Linking target samples/server 00:01:35.926 [34/37] Linking target samples/null 00:01:35.926 [35/37] Linking target samples/lspci 00:01:35.926 [36/37] Linking target samples/gpio-pci-idio-16 00:01:35.926 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:35.926 INFO: autodetecting backend as ninja 00:01:35.926 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
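The libvfio-user configure summary above lists the user-defined options (buildtype: debug, default_library: shared, libdir: /usr/local/lib) and the source/build directories, but not the exact command SPDK's build scripts ran. A rough sketch of an equivalent manual out-of-tree setup under those assumptions, followed by the ninja compile whose 37 targets appear above (the DESTDIR-staged meson install that completes the step is shown on the next log line):

  # Sketch only: configure libvfio-user out of tree with the options reported in the summary
  meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
      --buildtype debug --default-library shared --libdir /usr/local/lib
  # Build with the ninja backend that meson autodetected
  ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug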
00:01:35.926 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:36.189 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:36.189 ninja: no work to do. 00:01:44.348 CC lib/log/log.o 00:01:44.348 CC lib/log/log_flags.o 00:01:44.348 CC lib/log/log_deprecated.o 00:01:44.348 CC lib/ut/ut.o 00:01:44.348 CC lib/ut_mock/mock.o 00:01:44.348 LIB libspdk_ut_mock.a 00:01:44.348 LIB libspdk_log.a 00:01:44.348 LIB libspdk_ut.a 00:01:44.348 SO libspdk_ut_mock.so.6.0 00:01:44.348 SO libspdk_log.so.7.0 00:01:44.348 SO libspdk_ut.so.2.0 00:01:44.348 SYMLINK libspdk_ut_mock.so 00:01:44.348 SYMLINK libspdk_ut.so 00:01:44.348 SYMLINK libspdk_log.so 00:01:44.348 CXX lib/trace_parser/trace.o 00:01:44.348 CC lib/dma/dma.o 00:01:44.348 CC lib/ioat/ioat.o 00:01:44.348 CC lib/util/base64.o 00:01:44.348 CC lib/util/bit_array.o 00:01:44.348 CC lib/util/cpuset.o 00:01:44.348 CC lib/util/crc16.o 00:01:44.348 CC lib/util/crc32.o 00:01:44.348 CC lib/util/crc32c.o 00:01:44.348 CC lib/util/crc32_ieee.o 00:01:44.348 CC lib/util/crc64.o 00:01:44.348 CC lib/util/dif.o 00:01:44.348 CC lib/util/fd.o 00:01:44.348 CC lib/util/file.o 00:01:44.348 CC lib/util/hexlify.o 00:01:44.348 CC lib/util/iov.o 00:01:44.348 CC lib/util/math.o 00:01:44.348 CC lib/util/pipe.o 00:01:44.348 CC lib/util/strerror_tls.o 00:01:44.348 CC lib/util/string.o 00:01:44.348 CC lib/util/uuid.o 00:01:44.348 CC lib/util/fd_group.o 00:01:44.348 CC lib/util/xor.o 00:01:44.348 CC lib/util/zipf.o 00:01:44.610 CC lib/vfio_user/host/vfio_user_pci.o 00:01:44.610 CC lib/vfio_user/host/vfio_user.o 00:01:44.610 LIB libspdk_dma.a 00:01:44.610 SO libspdk_dma.so.4.0 00:01:44.610 LIB libspdk_ioat.a 00:01:44.610 SYMLINK libspdk_dma.so 00:01:44.610 SO libspdk_ioat.so.7.0 00:01:44.873 SYMLINK libspdk_ioat.so 00:01:44.873 LIB libspdk_vfio_user.a 00:01:44.873 SO libspdk_vfio_user.so.5.0 00:01:44.873 LIB libspdk_util.a 00:01:44.873 SYMLINK libspdk_vfio_user.so 00:01:44.873 SO libspdk_util.so.9.0 00:01:45.136 SYMLINK libspdk_util.so 00:01:45.136 LIB libspdk_trace_parser.a 00:01:45.136 SO libspdk_trace_parser.so.5.0 00:01:45.398 SYMLINK libspdk_trace_parser.so 00:01:45.398 CC lib/json/json_parse.o 00:01:45.398 CC lib/json/json_util.o 00:01:45.398 CC lib/json/json_write.o 00:01:45.398 CC lib/rdma/common.o 00:01:45.398 CC lib/rdma/rdma_verbs.o 00:01:45.398 CC lib/idxd/idxd.o 00:01:45.398 CC lib/idxd/idxd_user.o 00:01:45.398 CC lib/conf/conf.o 00:01:45.398 CC lib/vmd/vmd.o 00:01:45.398 CC lib/env_dpdk/env.o 00:01:45.398 CC lib/vmd/led.o 00:01:45.398 CC lib/env_dpdk/memory.o 00:01:45.398 CC lib/env_dpdk/pci.o 00:01:45.398 CC lib/env_dpdk/init.o 00:01:45.398 CC lib/env_dpdk/threads.o 00:01:45.398 CC lib/env_dpdk/pci_ioat.o 00:01:45.398 CC lib/env_dpdk/pci_virtio.o 00:01:45.398 CC lib/env_dpdk/pci_vmd.o 00:01:45.398 CC lib/env_dpdk/pci_idxd.o 00:01:45.398 CC lib/env_dpdk/pci_event.o 00:01:45.398 CC lib/env_dpdk/sigbus_handler.o 00:01:45.398 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:45.398 CC lib/env_dpdk/pci_dpdk.o 00:01:45.399 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:45.661 LIB libspdk_conf.a 00:01:45.661 LIB libspdk_json.a 00:01:45.661 LIB libspdk_rdma.a 00:01:45.661 SO libspdk_conf.so.6.0 00:01:45.661 SO libspdk_json.so.6.0 00:01:45.924 SO libspdk_rdma.so.6.0 00:01:45.924 SYMLINK libspdk_conf.so 00:01:45.924 SYMLINK libspdk_json.so 00:01:45.924 SYMLINK libspdk_rdma.so 00:01:45.924 LIB 
libspdk_idxd.a 00:01:45.924 SO libspdk_idxd.so.12.0 00:01:45.924 LIB libspdk_vmd.a 00:01:46.186 SO libspdk_vmd.so.6.0 00:01:46.186 SYMLINK libspdk_idxd.so 00:01:46.186 SYMLINK libspdk_vmd.so 00:01:46.186 CC lib/jsonrpc/jsonrpc_server.o 00:01:46.186 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:46.186 CC lib/jsonrpc/jsonrpc_client.o 00:01:46.186 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:46.449 LIB libspdk_jsonrpc.a 00:01:46.449 SO libspdk_jsonrpc.so.6.0 00:01:46.449 SYMLINK libspdk_jsonrpc.so 00:01:46.711 LIB libspdk_env_dpdk.a 00:01:46.711 SO libspdk_env_dpdk.so.14.0 00:01:46.974 CC lib/rpc/rpc.o 00:01:46.974 SYMLINK libspdk_env_dpdk.so 00:01:46.974 LIB libspdk_rpc.a 00:01:47.236 SO libspdk_rpc.so.6.0 00:01:47.236 SYMLINK libspdk_rpc.so 00:01:47.503 CC lib/notify/notify.o 00:01:47.503 CC lib/notify/notify_rpc.o 00:01:47.503 CC lib/keyring/keyring.o 00:01:47.503 CC lib/keyring/keyring_rpc.o 00:01:47.503 CC lib/trace/trace.o 00:01:47.503 CC lib/trace/trace_flags.o 00:01:47.503 CC lib/trace/trace_rpc.o 00:01:47.796 LIB libspdk_notify.a 00:01:47.796 SO libspdk_notify.so.6.0 00:01:47.796 LIB libspdk_keyring.a 00:01:47.797 LIB libspdk_trace.a 00:01:47.797 SO libspdk_keyring.so.1.0 00:01:47.797 SYMLINK libspdk_notify.so 00:01:47.797 SO libspdk_trace.so.10.0 00:01:47.797 SYMLINK libspdk_keyring.so 00:01:48.063 SYMLINK libspdk_trace.so 00:01:48.325 CC lib/thread/thread.o 00:01:48.325 CC lib/thread/iobuf.o 00:01:48.325 CC lib/sock/sock.o 00:01:48.325 CC lib/sock/sock_rpc.o 00:01:48.589 LIB libspdk_sock.a 00:01:48.589 SO libspdk_sock.so.9.0 00:01:48.851 SYMLINK libspdk_sock.so 00:01:49.112 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:49.112 CC lib/nvme/nvme_ctrlr.o 00:01:49.112 CC lib/nvme/nvme_fabric.o 00:01:49.112 CC lib/nvme/nvme_ns_cmd.o 00:01:49.112 CC lib/nvme/nvme_ns.o 00:01:49.112 CC lib/nvme/nvme_pcie_common.o 00:01:49.112 CC lib/nvme/nvme_pcie.o 00:01:49.112 CC lib/nvme/nvme_qpair.o 00:01:49.113 CC lib/nvme/nvme.o 00:01:49.113 CC lib/nvme/nvme_quirks.o 00:01:49.113 CC lib/nvme/nvme_transport.o 00:01:49.113 CC lib/nvme/nvme_discovery.o 00:01:49.113 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:49.113 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:49.113 CC lib/nvme/nvme_tcp.o 00:01:49.113 CC lib/nvme/nvme_opal.o 00:01:49.113 CC lib/nvme/nvme_io_msg.o 00:01:49.113 CC lib/nvme/nvme_poll_group.o 00:01:49.113 CC lib/nvme/nvme_zns.o 00:01:49.113 CC lib/nvme/nvme_stubs.o 00:01:49.113 CC lib/nvme/nvme_auth.o 00:01:49.113 CC lib/nvme/nvme_cuse.o 00:01:49.113 CC lib/nvme/nvme_vfio_user.o 00:01:49.113 CC lib/nvme/nvme_rdma.o 00:01:49.686 LIB libspdk_thread.a 00:01:49.686 SO libspdk_thread.so.10.0 00:01:49.686 SYMLINK libspdk_thread.so 00:01:49.949 CC lib/init/json_config.o 00:01:49.949 CC lib/init/subsystem.o 00:01:49.949 CC lib/init/subsystem_rpc.o 00:01:49.949 CC lib/init/rpc.o 00:01:49.949 CC lib/vfu_tgt/tgt_endpoint.o 00:01:49.949 CC lib/vfu_tgt/tgt_rpc.o 00:01:49.949 CC lib/virtio/virtio.o 00:01:49.949 CC lib/blob/blobstore.o 00:01:49.949 CC lib/virtio/virtio_vhost_user.o 00:01:49.949 CC lib/accel/accel.o 00:01:49.949 CC lib/virtio/virtio_vfio_user.o 00:01:49.949 CC lib/blob/request.o 00:01:49.949 CC lib/accel/accel_sw.o 00:01:49.949 CC lib/blob/zeroes.o 00:01:49.949 CC lib/accel/accel_rpc.o 00:01:49.949 CC lib/virtio/virtio_pci.o 00:01:49.949 CC lib/blob/blob_bs_dev.o 00:01:50.212 LIB libspdk_init.a 00:01:50.212 SO libspdk_init.so.5.0 00:01:50.212 LIB libspdk_vfu_tgt.a 00:01:50.212 LIB libspdk_virtio.a 00:01:50.474 SO libspdk_vfu_tgt.so.3.0 00:01:50.474 SYMLINK libspdk_init.so 00:01:50.474 SO libspdk_virtio.so.7.0 
00:01:50.474 SYMLINK libspdk_vfu_tgt.so 00:01:50.474 SYMLINK libspdk_virtio.so 00:01:50.737 CC lib/event/app.o 00:01:50.737 CC lib/event/reactor.o 00:01:50.737 CC lib/event/log_rpc.o 00:01:50.737 CC lib/event/app_rpc.o 00:01:50.737 CC lib/event/scheduler_static.o 00:01:50.999 LIB libspdk_accel.a 00:01:50.999 SO libspdk_accel.so.15.0 00:01:50.999 LIB libspdk_nvme.a 00:01:50.999 SYMLINK libspdk_accel.so 00:01:50.999 SO libspdk_nvme.so.13.0 00:01:50.999 LIB libspdk_event.a 00:01:51.261 SO libspdk_event.so.13.0 00:01:51.261 SYMLINK libspdk_event.so 00:01:51.261 CC lib/bdev/bdev.o 00:01:51.261 CC lib/bdev/bdev_rpc.o 00:01:51.261 CC lib/bdev/bdev_zone.o 00:01:51.261 CC lib/bdev/part.o 00:01:51.261 CC lib/bdev/scsi_nvme.o 00:01:51.261 SYMLINK libspdk_nvme.so 00:01:52.649 LIB libspdk_blob.a 00:01:52.649 SO libspdk_blob.so.11.0 00:01:52.649 SYMLINK libspdk_blob.so 00:01:52.910 CC lib/lvol/lvol.o 00:01:52.910 CC lib/blobfs/blobfs.o 00:01:52.910 CC lib/blobfs/tree.o 00:01:53.483 LIB libspdk_bdev.a 00:01:53.744 SO libspdk_bdev.so.15.0 00:01:53.744 LIB libspdk_blobfs.a 00:01:53.744 SYMLINK libspdk_bdev.so 00:01:53.744 SO libspdk_blobfs.so.10.0 00:01:53.744 LIB libspdk_lvol.a 00:01:53.744 SO libspdk_lvol.so.10.0 00:01:53.744 SYMLINK libspdk_blobfs.so 00:01:54.006 SYMLINK libspdk_lvol.so 00:01:54.006 CC lib/nbd/nbd_rpc.o 00:01:54.006 CC lib/nbd/nbd.o 00:01:54.006 CC lib/ftl/ftl_core.o 00:01:54.006 CC lib/ftl/ftl_init.o 00:01:54.006 CC lib/scsi/dev.o 00:01:54.006 CC lib/nvmf/ctrlr.o 00:01:54.006 CC lib/ftl/ftl_layout.o 00:01:54.006 CC lib/scsi/lun.o 00:01:54.006 CC lib/nvmf/ctrlr_discovery.o 00:01:54.006 CC lib/ftl/ftl_debug.o 00:01:54.006 CC lib/scsi/port.o 00:01:54.006 CC lib/nvmf/ctrlr_bdev.o 00:01:54.006 CC lib/ftl/ftl_io.o 00:01:54.006 CC lib/scsi/scsi.o 00:01:54.006 CC lib/nvmf/subsystem.o 00:01:54.006 CC lib/ftl/ftl_sb.o 00:01:54.006 CC lib/scsi/scsi_bdev.o 00:01:54.006 CC lib/nvmf/nvmf.o 00:01:54.006 CC lib/ftl/ftl_l2p.o 00:01:54.006 CC lib/scsi/scsi_pr.o 00:01:54.006 CC lib/nvmf/nvmf_rpc.o 00:01:54.006 CC lib/ftl/ftl_l2p_flat.o 00:01:54.006 CC lib/ublk/ublk.o 00:01:54.006 CC lib/scsi/scsi_rpc.o 00:01:54.006 CC lib/nvmf/transport.o 00:01:54.006 CC lib/scsi/task.o 00:01:54.006 CC lib/ftl/ftl_nv_cache.o 00:01:54.006 CC lib/ublk/ublk_rpc.o 00:01:54.006 CC lib/nvmf/tcp.o 00:01:54.006 CC lib/ftl/ftl_band.o 00:01:54.006 CC lib/nvmf/stubs.o 00:01:54.006 CC lib/nvmf/mdns_server.o 00:01:54.006 CC lib/ftl/ftl_band_ops.o 00:01:54.006 CC lib/nvmf/vfio_user.o 00:01:54.006 CC lib/ftl/ftl_writer.o 00:01:54.006 CC lib/nvmf/rdma.o 00:01:54.006 CC lib/nvmf/auth.o 00:01:54.006 CC lib/ftl/ftl_rq.o 00:01:54.006 CC lib/ftl/ftl_reloc.o 00:01:54.006 CC lib/ftl/ftl_l2p_cache.o 00:01:54.006 CC lib/ftl/ftl_p2l.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:54.006 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:54.267 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:54.267 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:54.267 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:54.267 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:54.267 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:54.267 CC lib/ftl/utils/ftl_md.o 00:01:54.267 CC lib/ftl/utils/ftl_conf.o 00:01:54.267 CC lib/ftl/utils/ftl_mempool.o 00:01:54.267 CC lib/ftl/utils/ftl_bitmap.o 00:01:54.267 CC lib/ftl/utils/ftl_property.o 00:01:54.267 CC 
lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:54.267 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:54.267 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:54.267 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:54.267 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:54.267 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:54.267 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:54.267 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:54.267 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:54.267 CC lib/ftl/base/ftl_base_dev.o 00:01:54.267 CC lib/ftl/ftl_trace.o 00:01:54.267 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:54.267 CC lib/ftl/base/ftl_base_bdev.o 00:01:54.570 LIB libspdk_nbd.a 00:01:54.570 SO libspdk_nbd.so.7.0 00:01:54.570 SYMLINK libspdk_nbd.so 00:01:54.832 LIB libspdk_scsi.a 00:01:54.832 LIB libspdk_ublk.a 00:01:54.832 SO libspdk_scsi.so.9.0 00:01:54.832 SO libspdk_ublk.so.3.0 00:01:54.832 SYMLINK libspdk_ublk.so 00:01:55.094 SYMLINK libspdk_scsi.so 00:01:55.094 LIB libspdk_ftl.a 00:01:55.094 SO libspdk_ftl.so.9.0 00:01:55.356 CC lib/iscsi/conn.o 00:01:55.356 CC lib/iscsi/iscsi.o 00:01:55.356 CC lib/iscsi/init_grp.o 00:01:55.356 CC lib/iscsi/md5.o 00:01:55.356 CC lib/iscsi/portal_grp.o 00:01:55.356 CC lib/iscsi/param.o 00:01:55.356 CC lib/iscsi/iscsi_subsystem.o 00:01:55.356 CC lib/iscsi/iscsi_rpc.o 00:01:55.356 CC lib/iscsi/task.o 00:01:55.356 CC lib/iscsi/tgt_node.o 00:01:55.356 CC lib/vhost/vhost.o 00:01:55.356 CC lib/vhost/vhost_rpc.o 00:01:55.356 CC lib/vhost/vhost_scsi.o 00:01:55.356 CC lib/vhost/vhost_blk.o 00:01:55.356 CC lib/vhost/rte_vhost_user.o 00:01:55.617 SYMLINK libspdk_ftl.so 00:01:56.191 LIB libspdk_nvmf.a 00:01:56.191 SO libspdk_nvmf.so.18.0 00:01:56.191 LIB libspdk_vhost.a 00:01:56.452 SO libspdk_vhost.so.8.0 00:01:56.452 SYMLINK libspdk_nvmf.so 00:01:56.452 SYMLINK libspdk_vhost.so 00:01:56.452 LIB libspdk_iscsi.a 00:01:56.452 SO libspdk_iscsi.so.8.0 00:01:56.713 SYMLINK libspdk_iscsi.so 00:01:57.288 CC module/vfu_device/vfu_virtio.o 00:01:57.288 CC module/vfu_device/vfu_virtio_blk.o 00:01:57.288 CC module/vfu_device/vfu_virtio_scsi.o 00:01:57.288 CC module/vfu_device/vfu_virtio_rpc.o 00:01:57.288 CC module/env_dpdk/env_dpdk_rpc.o 00:01:57.288 LIB libspdk_env_dpdk_rpc.a 00:01:57.288 CC module/blob/bdev/blob_bdev.o 00:01:57.288 CC module/keyring/file/keyring.o 00:01:57.288 CC module/keyring/file/keyring_rpc.o 00:01:57.288 SO libspdk_env_dpdk_rpc.so.6.0 00:01:57.288 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:57.288 CC module/scheduler/gscheduler/gscheduler.o 00:01:57.288 CC module/accel/dsa/accel_dsa.o 00:01:57.288 CC module/sock/posix/posix.o 00:01:57.288 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:57.288 CC module/accel/dsa/accel_dsa_rpc.o 00:01:57.288 CC module/accel/iaa/accel_iaa.o 00:01:57.288 CC module/accel/error/accel_error.o 00:01:57.288 CC module/accel/error/accel_error_rpc.o 00:01:57.288 CC module/accel/iaa/accel_iaa_rpc.o 00:01:57.288 CC module/accel/ioat/accel_ioat.o 00:01:57.288 CC module/accel/ioat/accel_ioat_rpc.o 00:01:57.549 SYMLINK libspdk_env_dpdk_rpc.so 00:01:57.549 LIB libspdk_keyring_file.a 00:01:57.549 LIB libspdk_scheduler_gscheduler.a 00:01:57.549 LIB libspdk_scheduler_dpdk_governor.a 00:01:57.549 LIB libspdk_scheduler_dynamic.a 00:01:57.549 SO libspdk_keyring_file.so.1.0 00:01:57.549 LIB libspdk_accel_ioat.a 00:01:57.549 SO libspdk_scheduler_gscheduler.so.4.0 00:01:57.549 LIB libspdk_accel_error.a 00:01:57.549 LIB libspdk_accel_iaa.a 00:01:57.549 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:57.549 SO libspdk_scheduler_dynamic.so.4.0 00:01:57.549 LIB libspdk_accel_dsa.a 
00:01:57.549 LIB libspdk_blob_bdev.a 00:01:57.549 SO libspdk_accel_ioat.so.6.0 00:01:57.549 SO libspdk_accel_error.so.2.0 00:01:57.812 SYMLINK libspdk_scheduler_gscheduler.so 00:01:57.812 SO libspdk_accel_iaa.so.3.0 00:01:57.812 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:57.812 SO libspdk_accel_dsa.so.5.0 00:01:57.812 SYMLINK libspdk_keyring_file.so 00:01:57.812 SO libspdk_blob_bdev.so.11.0 00:01:57.812 SYMLINK libspdk_scheduler_dynamic.so 00:01:57.812 SYMLINK libspdk_accel_ioat.so 00:01:57.812 SYMLINK libspdk_accel_error.so 00:01:57.812 SYMLINK libspdk_accel_iaa.so 00:01:57.812 SYMLINK libspdk_accel_dsa.so 00:01:57.812 SYMLINK libspdk_blob_bdev.so 00:01:57.812 LIB libspdk_vfu_device.a 00:01:57.812 SO libspdk_vfu_device.so.3.0 00:01:57.812 SYMLINK libspdk_vfu_device.so 00:01:58.075 LIB libspdk_sock_posix.a 00:01:58.075 SO libspdk_sock_posix.so.6.0 00:01:58.337 SYMLINK libspdk_sock_posix.so 00:01:58.337 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:58.337 CC module/bdev/delay/vbdev_delay.o 00:01:58.337 CC module/bdev/nvme/bdev_nvme.o 00:01:58.337 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:58.337 CC module/bdev/nvme/bdev_mdns_client.o 00:01:58.337 CC module/bdev/nvme/nvme_rpc.o 00:01:58.337 CC module/bdev/split/vbdev_split_rpc.o 00:01:58.337 CC module/bdev/nvme/vbdev_opal.o 00:01:58.337 CC module/bdev/split/vbdev_split.o 00:01:58.337 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:58.337 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:58.337 CC module/bdev/aio/bdev_aio.o 00:01:58.337 CC module/bdev/malloc/bdev_malloc.o 00:01:58.337 CC module/bdev/ftl/bdev_ftl.o 00:01:58.337 CC module/bdev/error/vbdev_error.o 00:01:58.337 CC module/bdev/error/vbdev_error_rpc.o 00:01:58.337 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:58.337 CC module/bdev/aio/bdev_aio_rpc.o 00:01:58.337 CC module/bdev/lvol/vbdev_lvol.o 00:01:58.337 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:58.337 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:58.337 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:58.337 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:58.337 CC module/bdev/raid/bdev_raid.o 00:01:58.337 CC module/bdev/raid/bdev_raid_rpc.o 00:01:58.337 CC module/bdev/gpt/gpt.o 00:01:58.337 CC module/bdev/raid/bdev_raid_sb.o 00:01:58.337 CC module/bdev/gpt/vbdev_gpt.o 00:01:58.337 CC module/blobfs/bdev/blobfs_bdev.o 00:01:58.337 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:58.337 CC module/bdev/raid/raid1.o 00:01:58.337 CC module/bdev/raid/raid0.o 00:01:58.337 CC module/bdev/passthru/vbdev_passthru.o 00:01:58.337 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:58.337 CC module/bdev/raid/concat.o 00:01:58.337 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:58.337 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:58.337 CC module/bdev/iscsi/bdev_iscsi.o 00:01:58.337 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:58.337 CC module/bdev/null/bdev_null.o 00:01:58.337 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:58.337 CC module/bdev/null/bdev_null_rpc.o 00:01:58.598 LIB libspdk_blobfs_bdev.a 00:01:58.598 LIB libspdk_bdev_split.a 00:01:58.598 SO libspdk_blobfs_bdev.so.6.0 00:01:58.598 LIB libspdk_bdev_null.a 00:01:58.598 SO libspdk_bdev_split.so.6.0 00:01:58.598 LIB libspdk_bdev_error.a 00:01:58.598 LIB libspdk_bdev_gpt.a 00:01:58.598 SO libspdk_bdev_null.so.6.0 00:01:58.598 LIB libspdk_bdev_delay.a 00:01:58.598 LIB libspdk_bdev_aio.a 00:01:58.598 LIB libspdk_bdev_ftl.a 00:01:58.598 LIB libspdk_bdev_zone_block.a 00:01:58.599 SO libspdk_bdev_error.so.6.0 00:01:58.599 SYMLINK libspdk_bdev_split.so 00:01:58.599 LIB libspdk_bdev_passthru.a 
00:01:58.599 SYMLINK libspdk_blobfs_bdev.so 00:01:58.599 LIB libspdk_bdev_malloc.a 00:01:58.599 SO libspdk_bdev_gpt.so.6.0 00:01:58.599 SO libspdk_bdev_delay.so.6.0 00:01:58.599 SO libspdk_bdev_zone_block.so.6.0 00:01:58.599 SO libspdk_bdev_ftl.so.6.0 00:01:58.599 SO libspdk_bdev_aio.so.6.0 00:01:58.861 SYMLINK libspdk_bdev_null.so 00:01:58.861 SO libspdk_bdev_malloc.so.6.0 00:01:58.861 SO libspdk_bdev_passthru.so.6.0 00:01:58.861 LIB libspdk_bdev_iscsi.a 00:01:58.861 SYMLINK libspdk_bdev_error.so 00:01:58.861 SYMLINK libspdk_bdev_delay.so 00:01:58.861 SYMLINK libspdk_bdev_gpt.so 00:01:58.861 SO libspdk_bdev_iscsi.so.6.0 00:01:58.861 SYMLINK libspdk_bdev_ftl.so 00:01:58.861 SYMLINK libspdk_bdev_zone_block.so 00:01:58.861 SYMLINK libspdk_bdev_aio.so 00:01:58.861 SYMLINK libspdk_bdev_malloc.so 00:01:58.861 SYMLINK libspdk_bdev_passthru.so 00:01:58.861 LIB libspdk_bdev_lvol.a 00:01:58.861 SYMLINK libspdk_bdev_iscsi.so 00:01:58.861 LIB libspdk_bdev_virtio.a 00:01:58.861 SO libspdk_bdev_lvol.so.6.0 00:01:58.861 SO libspdk_bdev_virtio.so.6.0 00:01:58.861 SYMLINK libspdk_bdev_lvol.so 00:01:59.123 SYMLINK libspdk_bdev_virtio.so 00:01:59.123 LIB libspdk_bdev_raid.a 00:01:59.123 SO libspdk_bdev_raid.so.6.0 00:01:59.385 SYMLINK libspdk_bdev_raid.so 00:02:00.330 LIB libspdk_bdev_nvme.a 00:02:00.330 SO libspdk_bdev_nvme.so.7.0 00:02:00.330 SYMLINK libspdk_bdev_nvme.so 00:02:01.273 CC module/event/subsystems/iobuf/iobuf.o 00:02:01.273 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:01.273 CC module/event/subsystems/sock/sock.o 00:02:01.273 CC module/event/subsystems/scheduler/scheduler.o 00:02:01.273 CC module/event/subsystems/vmd/vmd.o 00:02:01.273 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:01.273 CC module/event/subsystems/keyring/keyring.o 00:02:01.273 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:01.273 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:01.273 LIB libspdk_event_keyring.a 00:02:01.273 LIB libspdk_event_vmd.a 00:02:01.273 LIB libspdk_event_iobuf.a 00:02:01.273 LIB libspdk_event_sock.a 00:02:01.273 SO libspdk_event_keyring.so.1.0 00:02:01.273 LIB libspdk_event_scheduler.a 00:02:01.273 LIB libspdk_event_vfu_tgt.a 00:02:01.273 LIB libspdk_event_vhost_blk.a 00:02:01.273 SO libspdk_event_sock.so.5.0 00:02:01.273 SO libspdk_event_vmd.so.6.0 00:02:01.273 SO libspdk_event_iobuf.so.3.0 00:02:01.273 SO libspdk_event_scheduler.so.4.0 00:02:01.273 SO libspdk_event_vfu_tgt.so.3.0 00:02:01.273 SO libspdk_event_vhost_blk.so.3.0 00:02:01.273 SYMLINK libspdk_event_keyring.so 00:02:01.273 SYMLINK libspdk_event_sock.so 00:02:01.273 SYMLINK libspdk_event_vmd.so 00:02:01.273 SYMLINK libspdk_event_iobuf.so 00:02:01.273 SYMLINK libspdk_event_vfu_tgt.so 00:02:01.273 SYMLINK libspdk_event_scheduler.so 00:02:01.535 SYMLINK libspdk_event_vhost_blk.so 00:02:01.797 CC module/event/subsystems/accel/accel.o 00:02:01.797 LIB libspdk_event_accel.a 00:02:02.059 SO libspdk_event_accel.so.6.0 00:02:02.059 SYMLINK libspdk_event_accel.so 00:02:02.325 CC module/event/subsystems/bdev/bdev.o 00:02:02.629 LIB libspdk_event_bdev.a 00:02:02.629 SO libspdk_event_bdev.so.6.0 00:02:02.629 SYMLINK libspdk_event_bdev.so 00:02:02.891 CC module/event/subsystems/scsi/scsi.o 00:02:02.891 CC module/event/subsystems/ublk/ublk.o 00:02:02.891 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:02.891 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:02.891 CC module/event/subsystems/nbd/nbd.o 00:02:03.154 LIB libspdk_event_nbd.a 00:02:03.154 LIB libspdk_event_ublk.a 00:02:03.154 LIB libspdk_event_scsi.a 00:02:03.154 SO 
libspdk_event_nbd.so.6.0 00:02:03.154 SO libspdk_event_ublk.so.3.0 00:02:03.154 SO libspdk_event_scsi.so.6.0 00:02:03.154 LIB libspdk_event_nvmf.a 00:02:03.154 SYMLINK libspdk_event_nbd.so 00:02:03.154 SYMLINK libspdk_event_scsi.so 00:02:03.154 SYMLINK libspdk_event_ublk.so 00:02:03.154 SO libspdk_event_nvmf.so.6.0 00:02:03.416 SYMLINK libspdk_event_nvmf.so 00:02:03.679 CC module/event/subsystems/iscsi/iscsi.o 00:02:03.679 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:03.679 LIB libspdk_event_vhost_scsi.a 00:02:03.941 LIB libspdk_event_iscsi.a 00:02:03.941 SO libspdk_event_vhost_scsi.so.3.0 00:02:03.941 SO libspdk_event_iscsi.so.6.0 00:02:03.941 SYMLINK libspdk_event_iscsi.so 00:02:03.941 SYMLINK libspdk_event_vhost_scsi.so 00:02:04.205 SO libspdk.so.6.0 00:02:04.205 SYMLINK libspdk.so 00:02:04.467 CXX app/trace/trace.o 00:02:04.467 CC app/spdk_lspci/spdk_lspci.o 00:02:04.467 CC app/trace_record/trace_record.o 00:02:04.467 CC app/spdk_nvme_perf/perf.o 00:02:04.467 CC app/spdk_top/spdk_top.o 00:02:04.467 CC test/rpc_client/rpc_client_test.o 00:02:04.467 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.467 TEST_HEADER include/spdk/accel_module.h 00:02:04.467 CC app/spdk_nvme_identify/identify.o 00:02:04.467 TEST_HEADER include/spdk/accel.h 00:02:04.467 TEST_HEADER include/spdk/barrier.h 00:02:04.467 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:04.467 TEST_HEADER include/spdk/bdev.h 00:02:04.742 TEST_HEADER include/spdk/assert.h 00:02:04.742 TEST_HEADER include/spdk/base64.h 00:02:04.742 TEST_HEADER include/spdk/bit_array.h 00:02:04.742 TEST_HEADER include/spdk/bdev_zone.h 00:02:04.742 TEST_HEADER include/spdk/bit_pool.h 00:02:04.742 TEST_HEADER include/spdk/bdev_module.h 00:02:04.742 CC app/nvmf_tgt/nvmf_main.o 00:02:04.742 TEST_HEADER include/spdk/blobfs.h 00:02:04.742 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.742 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.742 TEST_HEADER include/spdk/blob.h 00:02:04.742 TEST_HEADER include/spdk/cpuset.h 00:02:04.742 CC app/spdk_tgt/spdk_tgt.o 00:02:04.742 TEST_HEADER include/spdk/conf.h 00:02:04.742 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.742 TEST_HEADER include/spdk/config.h 00:02:04.742 TEST_HEADER include/spdk/crc16.h 00:02:04.742 TEST_HEADER include/spdk/dma.h 00:02:04.742 TEST_HEADER include/spdk/dif.h 00:02:04.742 TEST_HEADER include/spdk/crc32.h 00:02:04.742 TEST_HEADER include/spdk/crc64.h 00:02:04.742 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.742 TEST_HEADER include/spdk/endian.h 00:02:04.742 TEST_HEADER include/spdk/env.h 00:02:04.742 TEST_HEADER include/spdk/event.h 00:02:04.742 TEST_HEADER include/spdk/fd_group.h 00:02:04.742 TEST_HEADER include/spdk/fd.h 00:02:04.742 TEST_HEADER include/spdk/file.h 00:02:04.742 CC app/vhost/vhost.o 00:02:04.742 TEST_HEADER include/spdk/ftl.h 00:02:04.742 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.742 TEST_HEADER include/spdk/histogram_data.h 00:02:04.742 TEST_HEADER include/spdk/hexlify.h 00:02:04.742 CC app/spdk_dd/spdk_dd.o 00:02:04.742 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.742 TEST_HEADER include/spdk/idxd.h 00:02:04.742 TEST_HEADER include/spdk/init.h 00:02:04.742 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.742 TEST_HEADER include/spdk/ioat.h 00:02:04.742 TEST_HEADER include/spdk/iscsi_spec.h 00:02:04.742 TEST_HEADER include/spdk/json.h 00:02:04.742 TEST_HEADER include/spdk/keyring_module.h 00:02:04.742 TEST_HEADER include/spdk/keyring.h 00:02:04.742 TEST_HEADER include/spdk/log.h 00:02:04.742 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.742 TEST_HEADER 
include/spdk/likely.h 00:02:04.742 TEST_HEADER include/spdk/lvol.h 00:02:04.742 TEST_HEADER include/spdk/memory.h 00:02:04.742 TEST_HEADER include/spdk/mmio.h 00:02:04.742 TEST_HEADER include/spdk/nbd.h 00:02:04.742 TEST_HEADER include/spdk/notify.h 00:02:04.742 TEST_HEADER include/spdk/nvme.h 00:02:04.742 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.742 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.742 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.742 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.742 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.742 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:04.742 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.742 TEST_HEADER include/spdk/nvmf_spec.h 00:02:04.742 TEST_HEADER include/spdk/nvmf.h 00:02:04.742 TEST_HEADER include/spdk/opal.h 00:02:04.742 TEST_HEADER include/spdk/nvmf_transport.h 00:02:04.742 TEST_HEADER include/spdk/opal_spec.h 00:02:04.742 TEST_HEADER include/spdk/pci_ids.h 00:02:04.742 TEST_HEADER include/spdk/queue.h 00:02:04.742 TEST_HEADER include/spdk/rpc.h 00:02:04.742 TEST_HEADER include/spdk/reduce.h 00:02:04.742 TEST_HEADER include/spdk/pipe.h 00:02:04.742 TEST_HEADER include/spdk/scheduler.h 00:02:04.742 TEST_HEADER include/spdk/scsi.h 00:02:04.742 TEST_HEADER include/spdk/scsi_spec.h 00:02:04.742 TEST_HEADER include/spdk/sock.h 00:02:04.742 TEST_HEADER include/spdk/stdinc.h 00:02:04.742 TEST_HEADER include/spdk/string.h 00:02:04.742 TEST_HEADER include/spdk/trace.h 00:02:04.742 TEST_HEADER include/spdk/thread.h 00:02:04.742 TEST_HEADER include/spdk/trace_parser.h 00:02:04.742 TEST_HEADER include/spdk/ublk.h 00:02:04.742 TEST_HEADER include/spdk/tree.h 00:02:04.742 TEST_HEADER include/spdk/util.h 00:02:04.742 TEST_HEADER include/spdk/uuid.h 00:02:04.742 TEST_HEADER include/spdk/version.h 00:02:04.742 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:04.742 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:04.742 TEST_HEADER include/spdk/vhost.h 00:02:04.742 TEST_HEADER include/spdk/xor.h 00:02:04.742 TEST_HEADER include/spdk/vmd.h 00:02:04.742 CXX test/cpp_headers/accel.o 00:02:04.742 TEST_HEADER include/spdk/zipf.h 00:02:04.742 CXX test/cpp_headers/assert.o 00:02:04.742 CXX test/cpp_headers/barrier.o 00:02:04.742 CXX test/cpp_headers/base64.o 00:02:04.742 CXX test/cpp_headers/accel_module.o 00:02:04.742 CXX test/cpp_headers/bdev.o 00:02:04.742 CXX test/cpp_headers/bdev_module.o 00:02:04.742 CXX test/cpp_headers/bdev_zone.o 00:02:04.742 CXX test/cpp_headers/bit_array.o 00:02:04.742 CXX test/cpp_headers/bit_pool.o 00:02:04.742 CXX test/cpp_headers/blob_bdev.o 00:02:04.742 CXX test/cpp_headers/blobfs_bdev.o 00:02:04.742 CXX test/cpp_headers/blobfs.o 00:02:04.742 CXX test/cpp_headers/blob.o 00:02:04.742 CXX test/cpp_headers/conf.o 00:02:04.742 CXX test/cpp_headers/config.o 00:02:04.742 CXX test/cpp_headers/crc16.o 00:02:04.742 CXX test/cpp_headers/cpuset.o 00:02:04.742 CXX test/cpp_headers/crc32.o 00:02:04.742 CXX test/cpp_headers/crc64.o 00:02:04.742 CXX test/cpp_headers/dif.o 00:02:04.742 CXX test/cpp_headers/dma.o 00:02:04.742 CXX test/cpp_headers/endian.o 00:02:04.742 CXX test/cpp_headers/env_dpdk.o 00:02:04.742 CXX test/cpp_headers/env.o 00:02:04.742 CXX test/cpp_headers/event.o 00:02:04.742 CXX test/cpp_headers/fd_group.o 00:02:04.742 CXX test/cpp_headers/fd.o 00:02:04.742 CXX test/cpp_headers/file.o 00:02:04.742 CXX test/cpp_headers/ftl.o 00:02:04.742 CXX test/cpp_headers/gpt_spec.o 00:02:04.742 CXX test/cpp_headers/hexlify.o 00:02:04.742 CXX test/cpp_headers/histogram_data.o 00:02:04.742 CXX test/cpp_headers/idxd_spec.o 
00:02:04.742 CXX test/cpp_headers/idxd.o 00:02:04.742 CXX test/cpp_headers/init.o 00:02:04.742 CXX test/cpp_headers/ioat.o 00:02:04.742 CXX test/cpp_headers/ioat_spec.o 00:02:04.742 CXX test/cpp_headers/iscsi_spec.o 00:02:04.742 CXX test/cpp_headers/json.o 00:02:04.742 CXX test/cpp_headers/jsonrpc.o 00:02:04.742 CXX test/cpp_headers/keyring.o 00:02:04.742 CXX test/cpp_headers/keyring_module.o 00:02:04.742 CXX test/cpp_headers/likely.o 00:02:04.742 CXX test/cpp_headers/log.o 00:02:04.742 CXX test/cpp_headers/lvol.o 00:02:04.742 CXX test/cpp_headers/memory.o 00:02:04.742 CXX test/cpp_headers/mmio.o 00:02:04.742 CXX test/cpp_headers/nbd.o 00:02:04.742 CXX test/cpp_headers/nvme_intel.o 00:02:04.742 CXX test/cpp_headers/notify.o 00:02:04.742 CXX test/cpp_headers/nvme.o 00:02:04.742 CXX test/cpp_headers/nvme_ocssd.o 00:02:04.742 CXX test/cpp_headers/nvme_zns.o 00:02:04.742 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:04.742 CXX test/cpp_headers/nvme_spec.o 00:02:04.742 CXX test/cpp_headers/nvmf_cmd.o 00:02:04.742 CXX test/cpp_headers/nvmf.o 00:02:04.742 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:04.742 CXX test/cpp_headers/nvmf_spec.o 00:02:04.742 CXX test/cpp_headers/nvmf_transport.o 00:02:04.742 CXX test/cpp_headers/opal_spec.o 00:02:04.742 CXX test/cpp_headers/opal.o 00:02:04.742 CXX test/cpp_headers/pci_ids.o 00:02:04.742 CC test/app/histogram_perf/histogram_perf.o 00:02:04.742 CXX test/cpp_headers/pipe.o 00:02:04.742 CXX test/cpp_headers/queue.o 00:02:04.742 CXX test/cpp_headers/reduce.o 00:02:04.742 CXX test/cpp_headers/scheduler.o 00:02:04.742 CXX test/cpp_headers/rpc.o 00:02:04.742 CC test/event/reactor/reactor.o 00:02:04.742 CC test/app/jsoncat/jsoncat.o 00:02:04.742 CXX test/cpp_headers/scsi.o 00:02:04.742 CC examples/accel/perf/accel_perf.o 00:02:04.742 CC app/fio/nvme/fio_plugin.o 00:02:04.742 CC test/thread/poller_perf/poller_perf.o 00:02:04.742 CC examples/nvme/hello_world/hello_world.o 00:02:04.742 CC test/event/event_perf/event_perf.o 00:02:04.742 CC examples/bdev/bdevperf/bdevperf.o 00:02:04.742 CC examples/bdev/hello_world/hello_bdev.o 00:02:04.742 CC examples/vmd/led/led.o 00:02:05.017 CC test/app/stub/stub.o 00:02:05.017 CC examples/ioat/verify/verify.o 00:02:05.017 CC test/nvme/overhead/overhead.o 00:02:05.017 CC test/nvme/reserve/reserve.o 00:02:05.017 CC examples/idxd/perf/perf.o 00:02:05.017 CC examples/nvme/abort/abort.o 00:02:05.017 CC examples/ioat/perf/perf.o 00:02:05.017 CC examples/nvme/arbitration/arbitration.o 00:02:05.017 CC test/nvme/fdp/fdp.o 00:02:05.017 CC test/nvme/sgl/sgl.o 00:02:05.017 CC test/nvme/startup/startup.o 00:02:05.017 CC test/event/app_repeat/app_repeat.o 00:02:05.017 CC test/nvme/aer/aer.o 00:02:05.017 CC test/app/bdev_svc/bdev_svc.o 00:02:05.017 CC examples/sock/hello_world/hello_sock.o 00:02:05.017 CC examples/nvme/hotplug/hotplug.o 00:02:05.017 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:05.017 CC test/nvme/connect_stress/connect_stress.o 00:02:05.017 CC test/nvme/cuse/cuse.o 00:02:05.017 CC test/env/pci/pci_ut.o 00:02:05.017 CC test/event/reactor_perf/reactor_perf.o 00:02:05.017 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:05.017 CC test/env/vtophys/vtophys.o 00:02:05.017 CC test/nvme/simple_copy/simple_copy.o 00:02:05.017 CC test/nvme/compliance/nvme_compliance.o 00:02:05.017 CC test/accel/dif/dif.o 00:02:05.017 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:05.017 CC test/nvme/err_injection/err_injection.o 00:02:05.017 CC examples/vmd/lsvmd/lsvmd.o 00:02:05.017 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:05.017 
CC test/blobfs/mkfs/mkfs.o 00:02:05.017 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:05.017 CC examples/nvme/reconnect/reconnect.o 00:02:05.017 CC examples/util/zipf/zipf.o 00:02:05.017 CC test/nvme/e2edp/nvme_dp.o 00:02:05.017 CC test/nvme/boot_partition/boot_partition.o 00:02:05.017 CC test/nvme/fused_ordering/fused_ordering.o 00:02:05.017 CC app/fio/bdev/fio_plugin.o 00:02:05.017 CC examples/blob/hello_world/hello_blob.o 00:02:05.017 CC examples/blob/cli/blobcli.o 00:02:05.017 CC test/dma/test_dma/test_dma.o 00:02:05.017 CC test/nvme/reset/reset.o 00:02:05.017 CC test/bdev/bdevio/bdevio.o 00:02:05.017 CC test/event/scheduler/scheduler.o 00:02:05.017 CXX test/cpp_headers/scsi_spec.o 00:02:05.017 CC test/env/memory/memory_ut.o 00:02:05.017 CC examples/nvmf/nvmf/nvmf.o 00:02:05.017 CC examples/thread/thread/thread_ex.o 00:02:05.303 LINK spdk_lspci 00:02:05.303 LINK nvmf_tgt 00:02:05.303 LINK interrupt_tgt 00:02:05.303 LINK spdk_nvme_discover 00:02:05.303 LINK spdk_tgt 00:02:05.303 LINK iscsi_tgt 00:02:05.303 LINK vhost 00:02:05.303 LINK spdk_trace_record 00:02:05.303 LINK rpc_client_test 00:02:05.303 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:05.303 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:05.303 CC test/lvol/esnap/esnap.o 00:02:05.303 CC test/env/mem_callbacks/mem_callbacks.o 00:02:05.584 LINK histogram_perf 00:02:05.584 LINK led 00:02:05.584 LINK lsvmd 00:02:05.584 LINK env_dpdk_post_init 00:02:05.584 LINK reactor_perf 00:02:05.584 CXX test/cpp_headers/sock.o 00:02:05.584 LINK vtophys 00:02:05.584 CXX test/cpp_headers/stdinc.o 00:02:05.584 LINK doorbell_aers 00:02:05.584 LINK boot_partition 00:02:05.584 CXX test/cpp_headers/thread.o 00:02:05.584 CXX test/cpp_headers/string.o 00:02:05.584 CXX test/cpp_headers/trace_parser.o 00:02:05.584 LINK stub 00:02:05.584 LINK connect_stress 00:02:05.584 CXX test/cpp_headers/trace.o 00:02:05.584 CXX test/cpp_headers/tree.o 00:02:05.584 CXX test/cpp_headers/ublk.o 00:02:05.584 CXX test/cpp_headers/util.o 00:02:05.584 CXX test/cpp_headers/uuid.o 00:02:05.584 CXX test/cpp_headers/version.o 00:02:05.584 LINK poller_perf 00:02:05.584 CXX test/cpp_headers/vfio_user_pci.o 00:02:05.584 CXX test/cpp_headers/vfio_user_spec.o 00:02:05.584 CXX test/cpp_headers/vhost.o 00:02:05.584 CXX test/cpp_headers/vmd.o 00:02:05.584 LINK event_perf 00:02:05.584 CXX test/cpp_headers/xor.o 00:02:05.584 CXX test/cpp_headers/zipf.o 00:02:05.584 LINK reactor 00:02:05.584 LINK hello_world 00:02:05.584 LINK reserve 00:02:05.584 LINK app_repeat 00:02:05.584 LINK mkfs 00:02:05.584 LINK ioat_perf 00:02:05.844 LINK jsoncat 00:02:05.844 LINK hello_sock 00:02:05.844 LINK hotplug 00:02:05.844 LINK startup 00:02:05.844 LINK err_injection 00:02:05.844 LINK bdev_svc 00:02:05.844 LINK cmb_copy 00:02:05.844 LINK scheduler 00:02:05.844 LINK zipf 00:02:05.844 LINK nvme_compliance 00:02:05.844 LINK nvme_dp 00:02:05.844 LINK pmr_persistence 00:02:05.844 LINK idxd_perf 00:02:05.844 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:05.844 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:05.844 LINK verify 00:02:05.844 LINK abort 00:02:05.844 LINK hello_bdev 00:02:05.844 LINK overhead 00:02:05.844 LINK spdk_dd 00:02:05.844 LINK dif 00:02:05.844 LINK hello_blob 00:02:05.844 LINK fused_ordering 00:02:05.844 LINK simple_copy 00:02:05.844 LINK sgl 00:02:05.844 LINK spdk_trace 00:02:05.844 LINK reset 00:02:05.844 LINK aer 00:02:05.844 LINK pci_ut 00:02:05.844 LINK fdp 00:02:06.106 LINK accel_perf 00:02:06.106 LINK thread 00:02:06.106 LINK arbitration 00:02:06.106 LINK test_dma 00:02:06.106 LINK 
nvmf 00:02:06.106 LINK spdk_bdev 00:02:06.106 LINK spdk_nvme 00:02:06.106 LINK nvme_fuzz 00:02:06.106 LINK reconnect 00:02:06.106 LINK bdevio 00:02:06.106 LINK nvme_manage 00:02:06.106 LINK blobcli 00:02:06.368 LINK vhost_fuzz 00:02:06.368 LINK spdk_nvme_perf 00:02:06.368 LINK mem_callbacks 00:02:06.368 LINK spdk_top 00:02:06.368 LINK memory_ut 00:02:06.368 LINK spdk_nvme_identify 00:02:06.368 LINK bdevperf 00:02:06.631 LINK cuse 00:02:07.205 LINK iscsi_fuzz 00:02:09.125 LINK esnap 00:02:09.388 00:02:09.388 real 0m36.023s 00:02:09.388 user 5m19.018s 00:02:09.388 sys 3m49.306s 00:02:09.388 09:55:55 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:09.388 09:55:55 make -- common/autotest_common.sh@10 -- $ set +x 00:02:09.388 ************************************ 00:02:09.388 END TEST make 00:02:09.388 ************************************ 00:02:09.652 09:55:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:09.652 09:55:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:09.652 09:55:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:09.652 09:55:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:09.652 09:55:55 -- pm/common@44 -- $ pid=2450710 00:02:09.652 09:55:55 -- pm/common@50 -- $ kill -TERM 2450710 00:02:09.652 09:55:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:09.652 09:55:55 -- pm/common@44 -- $ pid=2450711 00:02:09.652 09:55:55 -- pm/common@50 -- $ kill -TERM 2450711 00:02:09.652 09:55:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:09.652 09:55:55 -- pm/common@44 -- $ pid=2450713 00:02:09.652 09:55:55 -- pm/common@50 -- $ kill -TERM 2450713 00:02:09.652 09:55:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:09.652 09:55:55 -- pm/common@44 -- $ pid=2450742 00:02:09.652 09:55:55 -- pm/common@50 -- $ sudo -E kill -TERM 2450742 00:02:09.652 09:55:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:09.652 09:55:55 -- nvmf/common.sh@7 -- # uname -s 00:02:09.652 09:55:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:09.652 09:55:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:09.652 09:55:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:09.652 09:55:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:09.652 09:55:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:09.652 09:55:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:09.652 09:55:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:09.652 09:55:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:09.652 09:55:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:09.652 09:55:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:09.652 09:55:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:09.652 09:55:55 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:09.652 09:55:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:09.652 09:55:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:09.652 09:55:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:09.652 09:55:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:09.652 09:55:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:09.652 09:55:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:09.652 09:55:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.652 09:55:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.652 09:55:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.652 09:55:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.652 09:55:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.652 09:55:55 -- paths/export.sh@5 -- # export PATH 00:02:09.652 09:55:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.652 09:55:55 -- nvmf/common.sh@47 -- # : 0 00:02:09.652 09:55:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:09.652 09:55:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:09.652 09:55:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:09.652 09:55:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:09.652 09:55:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:09.652 09:55:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:09.652 09:55:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:09.652 09:55:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:09.652 09:55:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:09.652 09:55:55 -- spdk/autotest.sh@32 -- # uname -s 00:02:09.652 09:55:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:09.652 09:55:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:09.652 09:55:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:09.652 09:55:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:09.652 09:55:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:09.652 09:55:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:09.652 09:55:55 -- spdk/autotest.sh@46 -- # type -P 
udevadm 00:02:09.652 09:55:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:09.652 09:55:55 -- spdk/autotest.sh@48 -- # udevadm_pid=2526837 00:02:09.652 09:55:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:09.652 09:55:55 -- pm/common@17 -- # local monitor 00:02:09.652 09:55:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:09.652 09:55:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.652 09:55:55 -- pm/common@21 -- # date +%s 00:02:09.652 09:55:55 -- pm/common@25 -- # sleep 1 00:02:09.652 09:55:55 -- pm/common@21 -- # date +%s 00:02:09.652 09:55:55 -- pm/common@21 -- # date +%s 00:02:09.652 09:55:55 -- pm/common@21 -- # date +%s 00:02:09.652 09:55:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715759755 00:02:09.652 09:55:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715759755 00:02:09.652 09:55:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715759755 00:02:09.652 09:55:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715759755 00:02:09.914 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715759755_collect-vmstat.pm.log 00:02:09.914 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715759755_collect-cpu-load.pm.log 00:02:09.914 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715759755_collect-cpu-temp.pm.log 00:02:09.914 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715759755_collect-bmc-pm.bmc.pm.log 00:02:10.859 09:55:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:10.859 09:55:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:10.859 09:55:56 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:10.859 09:55:56 -- common/autotest_common.sh@10 -- # set +x 00:02:10.859 09:55:56 -- spdk/autotest.sh@59 -- # create_test_list 00:02:10.859 09:55:56 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:10.859 09:55:56 -- common/autotest_common.sh@10 -- # set +x 00:02:10.859 09:55:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:10.859 09:55:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.859 09:55:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.859 09:55:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.859 09:55:56 -- spdk/autotest.sh@63 -- # cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.859 09:55:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:10.859 09:55:56 -- common/autotest_common.sh@1452 -- # uname 00:02:10.859 09:55:56 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:10.859 09:55:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:10.859 09:55:56 -- common/autotest_common.sh@1472 -- # uname 00:02:10.859 09:55:56 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:10.859 09:55:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:10.859 09:55:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:10.859 09:55:56 -- spdk/autotest.sh@72 -- # hash lcov 00:02:10.859 09:55:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:10.859 09:55:56 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:10.859 --rc lcov_branch_coverage=1 00:02:10.859 --rc lcov_function_coverage=1 00:02:10.859 --rc genhtml_branch_coverage=1 00:02:10.859 --rc genhtml_function_coverage=1 00:02:10.859 --rc genhtml_legend=1 00:02:10.859 --rc geninfo_all_blocks=1 00:02:10.859 ' 00:02:10.859 09:55:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:10.859 --rc lcov_branch_coverage=1 00:02:10.859 --rc lcov_function_coverage=1 00:02:10.859 --rc genhtml_branch_coverage=1 00:02:10.859 --rc genhtml_function_coverage=1 00:02:10.859 --rc genhtml_legend=1 00:02:10.859 --rc geninfo_all_blocks=1 00:02:10.859 ' 00:02:10.859 09:55:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:10.859 --rc lcov_branch_coverage=1 00:02:10.859 --rc lcov_function_coverage=1 00:02:10.859 --rc genhtml_branch_coverage=1 00:02:10.859 --rc genhtml_function_coverage=1 00:02:10.859 --rc genhtml_legend=1 00:02:10.859 --rc geninfo_all_blocks=1 00:02:10.859 --no-external' 00:02:10.859 09:55:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:10.859 --rc lcov_branch_coverage=1 00:02:10.859 --rc lcov_function_coverage=1 00:02:10.859 --rc genhtml_branch_coverage=1 00:02:10.859 --rc genhtml_function_coverage=1 00:02:10.859 --rc genhtml_legend=1 00:02:10.859 --rc geninfo_all_blocks=1 00:02:10.859 --no-external' 00:02:10.859 09:55:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:10.859 lcov: LCOV version 1.14 00:02:10.859 09:55:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:23.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:23.113 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:23.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:23.113 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:23.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:23.113 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:23.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:23.113 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:38.101 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:38.101 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:38.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:38.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 
00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:38.102 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 
00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:38.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:38.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:39.493 09:56:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:39.493 09:56:25 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:39.493 09:56:25 -- common/autotest_common.sh@10 -- # set +x 00:02:39.493 09:56:25 -- spdk/autotest.sh@91 -- # rm -f 00:02:39.493 09:56:25 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.806 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:42.806 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:42.806 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:43.068 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:43.068 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:43.068 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:43.068 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:43.331 09:56:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:43.331 09:56:28 -- common/autotest_common.sh@1666 -- # zoned_devs=() 
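The geninfo warnings above are expected for this stage: the cpp_headers objects come from building each public SPDK header on its own just to prove it compiles, so the resulting .gcno files contain no instrumented functions and carry no coverage data. If those warnings are unwanted in the final report, one option is to drop that directory from the captured tracefile afterwards. This is only a sketch, assuming lcov is used to post-process the capture; "coverage.info" and the output name are placeholders, not files produced by this run.

# Hypothetical post-processing step: remove the header-compile objects from an
# already-captured tracefile so they stop appearing in the report.
lcov --remove coverage.info '*/test/cpp_headers/*' \
     --output-file coverage.filtered.info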
00:02:43.331 09:56:28 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:43.331 09:56:28 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:43.331 09:56:28 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:43.331 09:56:28 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:43.331 09:56:28 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:43.331 09:56:28 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.331 09:56:28 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:43.331 09:56:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:43.331 09:56:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:43.331 09:56:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:43.331 09:56:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:43.331 09:56:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:43.331 09:56:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:43.331 No valid GPT data, bailing 00:02:43.331 09:56:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:43.331 09:56:28 -- scripts/common.sh@391 -- # pt= 00:02:43.331 09:56:28 -- scripts/common.sh@392 -- # return 1 00:02:43.331 09:56:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:43.331 1+0 records in 00:02:43.331 1+0 records out 00:02:43.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00172141 s, 609 MB/s 00:02:43.331 09:56:28 -- spdk/autotest.sh@118 -- # sync 00:02:43.331 09:56:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:43.331 09:56:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:43.331 09:56:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:51.487 09:56:36 -- spdk/autotest.sh@124 -- # uname -s 00:02:51.487 09:56:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:51.487 09:56:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.487 09:56:36 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:51.487 09:56:36 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:51.487 09:56:36 -- common/autotest_common.sh@10 -- # set +x 00:02:51.487 ************************************ 00:02:51.487 START TEST setup.sh 00:02:51.487 ************************************ 00:02:51.487 09:56:36 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.487 * Looking for test storage... 
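The pre-cleanup trace just above follows a fixed pattern before it wipes a disk: skip any zoned namespace, confirm the block device is not already carrying a partition table, and only then zero its first megabyte. A standalone sketch of the same checks, not the harness itself; the device name is the example from this run and the commands need root:

dev=nvme0n1                                   # example device, matching the trace
# A namespace counts as zoned when the sysfs attribute reports anything but "none".
if [[ -e /sys/block/$dev/queue/zoned && $(cat /sys/block/$dev/queue/zoned) != none ]]; then
        echo "$dev is zoned, leaving it alone"; exit 0
fi
# blkid prints a partition-table type (gpt, dos, ...) only when one exists.
if pt=$(blkid -s PTTYPE -o value "/dev/$dev") && [[ -n $pt ]]; then
        echo "$dev already carries a $pt partition table"; exit 1
fi
# Device looks unused: zero the first MiB, like the dd step in the log.
dd if=/dev/zero of="/dev/$dev" bs=1M count=1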
00:02:51.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.487 09:56:37 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:51.487 09:56:37 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:51.487 09:56:37 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:51.487 09:56:37 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:51.487 09:56:37 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:51.487 09:56:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:51.487 ************************************ 00:02:51.487 START TEST acl 00:02:51.487 ************************************ 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:51.487 * Looking for test storage... 00:02:51.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.487 09:56:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:51.487 09:56:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:02:51.487 09:56:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:51.487 09:56:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:51.487 09:56:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:51.487 09:56:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:51.487 09:56:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:51.487 09:56:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.487 09:56:37 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.706 09:56:40 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:55.706 09:56:40 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:55.706 09:56:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:55.706 09:56:40 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:55.706 09:56:40 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.706 09:56:40 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:58.260 Hugepages 00:02:58.260 node hugesize free / total 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 00:02:58.523 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.523 09:56:44 setup.sh.acl 
-- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:58.523 09:56:44 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:58.523 09:56:44 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:02:58.523 09:56:44 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:02:58.523 09:56:44 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.785 ************************************ 00:02:58.785 START TEST denied 00:02:58.785 ************************************ 00:02:58.785 09:56:44 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:02:58.785 09:56:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:58.785 09:56:44 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 
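The denied test starting here drives scripts/setup.sh purely through environment variables: PCI_BLOCKED hides a controller from setup, and the companion allowed test later does the opposite with PCI_ALLOWED. The same knobs work outside the harness; a minimal sketch, with the BDF from this run used only as an example:

bdf=0000:65:00.0                    # example BDF taken from the trace
# Keep this controller on its kernel driver; setup.sh config should report
# "Skipping denied controller at 0000:65:00.0", as it does below.
PCI_BLOCKED="$bdf" ./scripts/setup.sh config
# The opposite policy: restrict setup to this one controller.
PCI_ALLOWED="$bdf" ./scripts/setup.sh config
# The acl test verifies the outcome by reading the bound driver from sysfs:
readlink -f "/sys/bus/pci/devices/$bdf/driver"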
00:02:58.785 09:56:44 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:58.785 09:56:44 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.785 09:56:44 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:03.003 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.003 09:56:48 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.262 00:03:07.262 real 0m8.680s 00:03:07.262 user 0m2.849s 00:03:07.262 sys 0m5.091s 00:03:07.262 09:56:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:07.262 09:56:53 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:07.262 ************************************ 00:03:07.262 END TEST denied 00:03:07.262 ************************************ 00:03:07.262 09:56:53 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:07.262 09:56:53 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:07.262 09:56:53 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:07.262 09:56:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.525 ************************************ 00:03:07.525 START TEST allowed 00:03:07.525 ************************************ 00:03:07.525 09:56:53 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:03:07.525 09:56:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:07.525 09:56:53 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:07.525 09:56:53 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:07.525 09:56:53 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.525 09:56:53 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.828 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:12.828 09:56:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:12.828 09:56:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:12.828 09:56:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:12.828 09:56:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:12.828 09:56:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.053 00:03:17.053 real 0m9.228s 00:03:17.053 user 0m2.675s 00:03:17.053 sys 0m4.790s 00:03:17.053 09:57:02 
setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:17.053 09:57:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:17.053 ************************************ 00:03:17.053 END TEST allowed 00:03:17.053 ************************************ 00:03:17.053 00:03:17.053 real 0m25.262s 00:03:17.053 user 0m8.163s 00:03:17.053 sys 0m14.667s 00:03:17.053 09:57:02 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:17.053 09:57:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:17.053 ************************************ 00:03:17.053 END TEST acl 00:03:17.053 ************************************ 00:03:17.053 09:57:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.053 09:57:02 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:17.053 09:57:02 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:17.053 09:57:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:17.053 ************************************ 00:03:17.053 START TEST hugepages 00:03:17.053 ************************************ 00:03:17.053 09:57:02 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:17.053 * Looking for test storage... 00:03:17.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 97322840 kB' 'MemAvailable: 101841528 kB' 'Buffers: 2696 kB' 'Cached: 20031180 kB' 'SwapCached: 0 kB' 'Active: 16121972 kB' 'Inactive: 4479888 kB' 'Active(anon): 15449388 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 
0 kB' 'AnonPages: 571372 kB' 'Mapped: 216236 kB' 'Shmem: 14881404 kB' 'KReclaimable: 381092 kB' 'Slab: 1280372 kB' 'SReclaimable: 381092 kB' 'SUnreclaim: 899280 kB' 'KernelStack: 27024 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460856 kB' 'Committed_AS: 16901452 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235536 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.053 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.054 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.055 09:57:02 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:17.055 09:57:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:17.055 09:57:02 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:17.055 09:57:02 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:17.055 09:57:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.055 ************************************ 00:03:17.055 START TEST default_setup 00:03:17.055 ************************************ 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 
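clear_hp above resets every per-node hugepage pool to zero before default_setup asks setup.sh for a fresh allocation of 1024 2 MiB pages on node 0. The same sysfs files can be driven by hand; a rough sketch of that sequence, run as root, with the page count and node taken from this test only as an example:

# Reset every per-node hugepage pool, the way clear_hp does above, then give
# node 0 the 1024 x 2 MiB pages that default_setup requests.
for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
                echo 0 > "$hp"
        done
done
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# /proc/meminfo should now show HugePages_Total: 1024 with a 2048 kB page size.
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo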
00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.055 09:57:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.368 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:20.368 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99474068 kB' 'MemAvailable: 103992736 kB' 'Buffers: 2696 kB' 'Cached: 20031308 kB' 'SwapCached: 0 kB' 'Active: 16135908 kB' 'Inactive: 4479888 kB' 'Active(anon): 15463324 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 584340 kB' 'Mapped: 216488 kB' 'Shmem: 14881532 kB' 'KReclaimable: 381052 kB' 'Slab: 1277652 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896600 kB' 'KernelStack: 27120 kB' 'PageTables: 9388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16915604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235500 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.636 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
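The setup/common.sh trace above (and continuing below) shows how get_meminfo resolves a single field: the meminfo snapshot is loaded with mapfile, each "key: value" line is split with IFS=': ' and read -r var val, non-matching keys fall through to continue, and the value of the first matching key is echoed (0 for AnonHugePages on this machine). A standalone sketch of that lookup pattern, assuming a hypothetical helper name (meminfo_lookup), not the real get_meminfo:

    # Sketch of the key lookup traced above (hypothetical helper, not setup/common.sh).
    meminfo_lookup() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo key
            echo "$val"                        # the kB unit lands in the throwaway field
            return 0
        done < /proc/meminfo
        return 1
    }

    meminfo_lookup AnonHugePages   # printed 0 on the system traced here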
00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.637 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99470168 kB' 'MemAvailable: 103988836 kB' 'Buffers: 2696 kB' 'Cached: 20031308 kB' 'SwapCached: 0 kB' 'Active: 16138556 kB' 'Inactive: 4479888 kB' 'Active(anon): 15465972 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587732 kB' 'Mapped: 216684 kB' 'Shmem: 14881532 kB' 'KReclaimable: 381052 kB' 'Slab: 1277660 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896608 kB' 'KernelStack: 27088 kB' 'PageTables: 9356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16918800 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235488 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.638 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.639 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99470612 kB' 'MemAvailable: 103989280 kB' 'Buffers: 2696 kB' 'Cached: 20031348 kB' 'SwapCached: 0 kB' 'Active: 16139124 kB' 'Inactive: 4479888 kB' 'Active(anon): 15466540 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 588356 kB' 'Mapped: 216608 kB' 'Shmem: 14881572 kB' 'KReclaimable: 381052 kB' 'Slab: 1277700 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896648 kB' 'KernelStack: 27120 kB' 'PageTables: 9448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16919192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235504 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
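Note the empty node value ("local node=") in the get_meminfo calls traced above: the existence test on /sys/devices/system/node/node/meminfo fails because the node number is missing, so the parse falls back to /proc/meminfo. When a node id is supplied, the same parse runs against the per-node meminfo file and the mem=("${mem[@]#Node +([0-9]) }") expansion strips the leading "Node N " prefix from each line. A small sketch of that source selection, again with a hypothetical helper name:

    # Hypothetical sketch of the meminfo source selection visible in the trace;
    # the real behaviour is implemented in setup/common.sh get_meminfo.
    node_meminfo_file() {
        local node=$1
        local f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            f=/sys/devices/system/node/node$node/meminfo   # per-NUMA-node snapshot
        fi
        echo "$f"
    }

    node_meminfo_file      # -> /proc/meminfo (node left empty, as in this run)
    node_meminfo_file 0    # -> the node0 snapshot when it exists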
00:03:20.640 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 
09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.641 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.642 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:20.643 nr_hugepages=1024 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.643 resv_hugepages=0 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.643 surplus_hugepages=0 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.643 anon_hugepages=0 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.643 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99470612 kB' 'MemAvailable: 103989280 kB' 'Buffers: 2696 kB' 'Cached: 20031384 kB' 'SwapCached: 0 kB' 'Active: 16138508 kB' 'Inactive: 4479888 kB' 'Active(anon): 15465924 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587688 kB' 'Mapped: 216608 kB' 'Shmem: 14881608 kB' 'KReclaimable: 381052 kB' 'Slab: 1277704 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896652 kB' 'KernelStack: 27104 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16919212 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235504 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.643 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.643 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.644 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57364136 kB' 'MemUsed: 8294872 kB' 'SwapCached: 0 kB' 'Active: 3220168 kB' 'Inactive: 285212 kB' 'Active(anon): 2682856 kB' 'Inactive(anon): 0 kB' 'Active(file): 537312 kB' 'Inactive(file): 285212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3204920 kB' 'Mapped: 104844 kB' 'AnonPages: 303748 kB' 'Shmem: 2382396 kB' 'KernelStack: 15640 kB' 'PageTables: 4864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183772 kB' 'Slab: 695112 kB' 'SReclaimable: 183772 kB' 'SUnreclaim: 511340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.645 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.646 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:20.647 node0=1024 expecting 1024 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:20.647 00:03:20.647 real 0m3.746s 00:03:20.647 user 0m1.306s 00:03:20.647 sys 0m2.429s 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:20.647 09:57:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:20.647 ************************************ 00:03:20.647 END TEST default_setup 00:03:20.647 ************************************ 00:03:20.909 09:57:06 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:20.909 09:57:06 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:20.909 09:57:06 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:20.909 09:57:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.909 
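Editor's note on the trace above: the long repetitive block is setup/common.sh's get_meminfo helper running under set -x. It snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is given), walks the snapshot with IFS=': ' read -r var val _, issues "continue" for every key that is not the one requested, and finally echoes the matching value — 0 for HugePages_Rsvd and 1024 for HugePages_Total in the default_setup run that just finished. hugepages.sh then only checks that HugePages_Total equals nr_hugepages + surp + resv and that the per-node counts add up, which is what the "node0=1024 expecting 1024" line confirms. A minimal, self-contained sketch of that lookup follows; the function name and argument handling are simplified assumptions, not the exact upstream implementation.

    #!/usr/bin/env bash
    # Sketch (assumption): condensed equivalent of the get_meminfo lookup traced above.
    # Usage: get_meminfo KEY [NODE]  -> prints the value column for KEY.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-NUMA-node counters live in sysfs and prefix each line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "${val:-0}"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the per-node prefix
        return 1
    }

    # Examples matching the values echoed in the log above:
    get_meminfo HugePages_Total     # 1024 on this runner
    get_meminfo HugePages_Surp 0    # surplus huge pages on NUMA node 0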
************************************ 00:03:20.909 START TEST per_node_1G_alloc 00:03:20.909 ************************************ 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:20.909 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:20.910 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:20.910 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:20.910 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.910 09:57:06 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:24.220 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:80:01.2 
(8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:24.220 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:24.220 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99477472 kB' 'MemAvailable: 103996140 kB' 'Buffers: 2696 kB' 'Cached: 20031476 kB' 'SwapCached: 0 kB' 'Active: 16133228 kB' 'Inactive: 4479888 kB' 'Active(anon): 15460644 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 
kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582400 kB' 'Mapped: 214740 kB' 'Shmem: 14881700 kB' 'KReclaimable: 381052 kB' 'Slab: 1277912 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896860 kB' 'KernelStack: 26848 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16899920 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235628 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
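The same lookup is repeated below for HugePages_Surp and HugePages_Rsvd, after which the verify step compares the kernel's counters against the requested allocation (nr_hugepages=1024 here). A hedged sketch of that kind of consistency check, with names chosen for illustration rather than taken from setup/hugepages.sh:

    verify_hugepages() {
        local expected=$1 key val
        declare -A hp
        for key in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp; do
            # pull each counter straight out of /proc/meminfo
            val=$(awk -v k="${key}:" '$1 == k {print $2}' /proc/meminfo)
            hp[$key]=${val:-0}
        done
        (( ${hp[HugePages_Total]} == expected )) \
            || { echo "expected $expected hugepages, kernel reports ${hp[HugePages_Total]}" >&2; return 1; }
        printf 'nr_hugepages=%s resv_hugepages=%s surplus_hugepages=%s\n' \
            "${hp[HugePages_Total]}" "${hp[HugePages_Rsvd]}" "${hp[HugePages_Surp]}"
    }

    # e.g. verify_hugepages 1024  -> nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0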
00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:24.488 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99477316 kB' 'MemAvailable: 103995984 kB' 'Buffers: 2696 kB' 'Cached: 20031480 kB' 'SwapCached: 0 kB' 'Active: 16133256 kB' 'Inactive: 4479888 kB' 'Active(anon): 15460672 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582844 kB' 'Mapped: 214668 kB' 'Shmem: 14881704 kB' 'KReclaimable: 381052 kB' 'Slab: 1277896 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896844 kB' 'KernelStack: 26976 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16900148 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235756 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 
09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.489 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.490 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99478848 kB' 'MemAvailable: 103997516 kB' 'Buffers: 2696 kB' 'Cached: 20031500 kB' 'SwapCached: 0 kB' 'Active: 16132960 kB' 'Inactive: 4479888 kB' 'Active(anon): 15460376 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582544 kB' 'Mapped: 214660 kB' 'Shmem: 14881724 kB' 'KReclaimable: 381052 kB' 'Slab: 1277896 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896844 kB' 'KernelStack: 27136 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16900172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235756 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 
'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.491 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 
09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:24.492 nr_hugepages=1024 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.492 resv_hugepages=0 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.492 surplus_hugepages=0 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.492 anon_hugepages=0 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99479480 kB' 'MemAvailable: 103998148 kB' 'Buffers: 2696 kB' 'Cached: 20031524 kB' 'SwapCached: 0 kB' 'Active: 16133160 kB' 'Inactive: 4479888 kB' 'Active(anon): 15460576 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582660 kB' 'Mapped: 214660 kB' 'Shmem: 14881748 kB' 'KReclaimable: 381052 kB' 'Slab: 1277896 kB' 'SReclaimable: 381052 kB' 'SUnreclaim: 896844 kB' 'KernelStack: 27056 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16898580 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 235772 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.492 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 
09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.493 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 
09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.759 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
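The xtrace output above is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key until it reaches the requested field (here HugePages_Total). A minimal sketch of that lookup pattern, assuming a hypothetical lookup_meminfo helper (not the repository's actual function) and the standard /proc and per-node meminfo layouts:

  lookup_meminfo() {                 # usage: lookup_meminfo <Key> [<node>]
    local key=$1 node=${2-} file=/proc/meminfo
    # per-node statistics live under /sys/devices/system/node/nodeN/meminfo
    [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
    # per-node files prefix each line with "Node N "; drop it, then print the value
    sed -E 's/^Node [0-9]+ //' "$file" |
      awk -v k="$key:" '$1 == k { print $2; exit }'
  }

  lookup_meminfo HugePages_Total     # e.g. 1024 in this run
  lookup_meminfo HugePages_Surp 0    # surplus 2 MiB pages on NUMA node 0

The real helper instead mapfiles the whole file and re-reads it with IFS=': ', which is why every non-matching key shows up in the trace as its own [[ ... ]] / continue pair before the matching key finally hits the echo/return path.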
00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58416732 kB' 'MemUsed: 7242276 kB' 'SwapCached: 0 kB' 'Active: 3222844 kB' 'Inactive: 285212 kB' 'Active(anon): 2685532 kB' 'Inactive(anon): 0 kB' 'Active(file): 537312 kB' 'Inactive(file): 285212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3204992 kB' 'Mapped: 104140 kB' 'AnonPages: 306308 kB' 'Shmem: 2382468 kB' 'KernelStack: 15656 kB' 'PageTables: 5128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 183772 kB' 'Slab: 695296 kB' 'SReclaimable: 183772 kB' 'SUnreclaim: 511524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 
09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.760 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 41064908 kB' 'MemUsed: 19614896 kB' 'SwapCached: 0 kB' 'Active: 12910456 kB' 'Inactive: 4194676 kB' 'Active(anon): 12775184 kB' 'Inactive(anon): 0 kB' 'Active(file): 135272 kB' 'Inactive(file): 4194676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 16829272 kB' 'Mapped: 110520 kB' 'AnonPages: 276412 kB' 'Shmem: 12499324 kB' 'KernelStack: 11400 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197280 kB' 'Slab: 582600 kB' 'SReclaimable: 197280 kB' 'SUnreclaim: 385320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.761 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
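Once HugePages_Surp has been read for both NUMA nodes, setup/hugepages.sh folds the reserved and surplus counts into the per-node totals and prints the "nodeN=... expecting ..." comparison seen just below. A rough sketch of that bookkeeping, reusing the hypothetical lookup_meminfo helper sketched earlier and simplifying the script's nodes_test/resv accounting:

  nodes_test=([0]=512 [1]=512)              # 2 MiB pages requested per node
  resv=0                                    # HugePages_Rsvd read earlier in the trace
  for node in "${!nodes_test[@]}"; do
    surp=$(lookup_meminfo HugePages_Surp "$node")   # 0 for both nodes in this run
    (( nodes_test[node] += resv + surp ))
    echo "node${node}=${nodes_test[node]} expecting 512"
  done

With resv=0 and surp=0 on both nodes the per-node count stays at 512 pages, consistent with the 1024 total hugepages verified above.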
00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 
09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.762 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.763 node0=512 expecting 512 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
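The long run of `[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` entries above is setup/common.sh's get_meminfo walking a meminfo snapshot one "key: value" pair at a time until it reaches the requested field (HugePages_Surp here), echoing its value (0) and returning. A minimal stand-alone paraphrase of that lookup, using only the standard /proc and per-node meminfo files; this is a sketch of the traced behaviour, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo (or a per-node meminfo file),
    # scanning "key: value" pairs the same way the trace above does.
    get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
        line=${line#"Node $node "}          # per-node files prefix each line with "Node <n> "
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
          echo "${val:-0}"                  # e.g. HugePages_Surp -> 0 in this run
          return 0
        fi
      done < "$mem_f"
      echo 0                                # field not present (or genuinely 0)
    }

    get_meminfo HugePages_Surp              # system-wide surplus hugepages
    get_meminfo HugePages_Free 0            # node 0's free hugepages, if the node meminfo file exists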
sorted_s[nodes_sys[node]]=1 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:24.763 node1=512 expecting 512 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:24.763 00:03:24.763 real 0m3.875s 00:03:24.763 user 0m1.553s 00:03:24.763 sys 0m2.376s 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:24.763 09:57:10 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:24.763 ************************************ 00:03:24.763 END TEST per_node_1G_alloc 00:03:24.763 ************************************ 00:03:24.763 09:57:10 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:24.763 09:57:10 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:24.763 09:57:10 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:24.763 09:57:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.763 ************************************ 00:03:24.763 START TEST even_2G_alloc 00:03:24.763 ************************************ 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 
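even_2G_alloc asks get_test_nr_hugepages for 2097152 kB; with the 2048 kB Hugepagesize reported in the meminfo snapshots that comes to 1024 pages, which get_test_nr_hugepages_per_node spreads evenly across the two NUMA nodes, 512 apiece, matching the "node0=512 / node1=512 expecting 512" lines printed by the previous test. A stand-alone sketch of that arithmetic (variable names follow the trace; the real helper also honours user-supplied node lists, which is omitted here):

    size_kb=2097152                 # requested allocation in kB (2 GiB)
    hugepage_kb=2048                # Hugepagesize from /proc/meminfo
    no_nodes=2                      # NUMA nodes on this system

    nr_hugepages=$(( size_kb / hugepage_kb ))         # 1024
    declare -a nodes_test
    for (( node = no_nodes - 1; node >= 0; node-- )); do
      nodes_test[node]=$(( nr_hugepages / no_nodes )) # 512 per node
    done
    echo "node0=${nodes_test[0]} expecting ${nodes_test[0]}"
    echo "node1=${nodes_test[1]} expecting ${nodes_test[1]}"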
00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.763 09:57:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.078 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:28.078 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:28.078 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
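With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, the test re-runs scripts/setup.sh, which finds every NVMe/IOAT function already bound to vfio-pci (the listing above) and then reserves the hugepages. The trace does not show how setup.sh performs the reservation; the sketch below uses the kernel's standard per-node sysfs interface to get the same even split and is an assumption about the mechanism, not a quote of setup.sh:

    NRHUGE=1024
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))             # 512 on this 2-node box
    for node in "${nodes[@]}"; do
      # Reserve 2 MiB hugepages on each node; writing nr_hugepages needs root.
      echo "$per_node" | sudo tee "$node/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
    done
    grep -E 'HugePages_(Total|Free)' /proc/meminfo    # expect 1024 / 1024 afterwards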
00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99504664 kB' 'MemAvailable: 104023312 kB' 'Buffers: 2696 kB' 'Cached: 20031664 kB' 'SwapCached: 0 kB' 'Active: 16130860 kB' 'Inactive: 4479888 kB' 'Active(anon): 15458276 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579592 kB' 'Mapped: 214684 kB' 'Shmem: 14881888 kB' 'KReclaimable: 381012 kB' 'Slab: 1278728 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897716 kB' 'KernelStack: 27072 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16900980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235900 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.396 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
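This scan is resolving AnonHugePages because, just before it, setup/hugepages.sh checked `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]`, i.e. whether transparent hugepages are switched off. "always [madvise] never" is the usual content of the THP "enabled" sysfs file, so the guard presumably looks like the sketch below (the path is inferred from that string rather than printed in the trace; get_meminfo is the paraphrase sketched earlier):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # THP can hand out anonymous hugepages; count them
    else
      anon=0                              # THP disabled, nothing to account for
    fi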
00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99504040 kB' 'MemAvailable: 104022688 kB' 'Buffers: 2696 kB' 'Cached: 20031668 kB' 'SwapCached: 0 kB' 'Active: 16131076 kB' 'Inactive: 4479888 kB' 'Active(anon): 15458492 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580020 kB' 'Mapped: 214680 kB' 'Shmem: 14881892 kB' 'KReclaimable: 381012 kB' 'Slab: 1278656 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897644 kB' 'KernelStack: 27056 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16900864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235916 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 
'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.397 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.398 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 
09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.399 
09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99504964 kB' 'MemAvailable: 104023612 kB' 'Buffers: 2696 kB' 'Cached: 20031684 kB' 'SwapCached: 0 kB' 'Active: 16130860 kB' 'Inactive: 4479888 kB' 'Active(anon): 15458276 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579708 kB' 'Mapped: 214604 kB' 'Shmem: 14881908 kB' 'KReclaimable: 381012 kB' 'Slab: 1278408 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897396 kB' 'KernelStack: 27168 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16901020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235900 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc 
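At this point verify_nr_hugepages has anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is starting the identical scan for HugePages_Rsvd. Collected together, these are the meminfo fields the verification reads before comparing against the expected per-node counts; a hedged recap using the get_meminfo paraphrase from earlier (the final pass/fail arithmetic lives later in setup/hugepages.sh and is not shown in this part of the log):

    anon=$(get_meminfo AnonHugePages)     # 0 kB in the snapshots above
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # queried next in the trace
    free=$(get_meminfo HugePages_Free)    # 1024
    total=$(get_meminfo HugePages_Total)  # 1024
    echo "total=$total free=$free surp=$surp resv=$resv anon=$anon"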
-- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.399 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.400 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.401 nr_hugepages=1024 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.401 resv_hugepages=0 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.401 surplus_hugepages=0 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:28.401 anon_hugepages=0 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- 
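The trace above is setup/common.sh's get_meminfo walking every key of /proc/meminfo until it reaches HugePages_Rsvd and echoing its value (0), which lets hugepages.sh record surp=0 and resv=0 against the requested nr_hugepages=1024. As an illustration only (get_meminfo_sketch is a hypothetical name, not the project's helper), the same lookup reduces to:

get_meminfo_sketch() {
    # Sketch assumption: the standard "Key:   value kB" layout of /proc/meminfo.
    # The real setup/common.sh also reads /sys/devices/system/node/nodeN/meminfo
    # and strips the leading "Node N " prefix those per-node files carry.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"          # e.g. HugePages_Rsvd -> 0 in this run
            return 0
        fi
    done < /proc/meminfo
    return 1
}
resv=$(get_meminfo_sketch HugePages_Rsvd)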
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99504428 kB' 'MemAvailable: 104023076 kB' 'Buffers: 2696 kB' 'Cached: 20031704 kB' 'SwapCached: 0 kB' 'Active: 16131192 kB' 'Inactive: 4479888 kB' 'Active(anon): 15458608 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580036 kB' 'Mapped: 214604 kB' 'Shmem: 14881928 kB' 'KReclaimable: 381012 kB' 'Slab: 1278412 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897400 kB' 'KernelStack: 27152 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16901040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235948 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.401 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.402 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- 
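Here get_meminfo has confirmed HugePages_Total=1024, and get_nodes has found two NUMA nodes with 512 pages expected on each, so the test now reads HugePages_Surp from each node's own meminfo file. A minimal sketch of that even-split check, assuming a two-node layout like the one in this run (illustrative only, not the project's hugepages.sh):

nr_hugepages=1024
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( nr_hugepages / ${#nodes[@]} ))      # 512 on this 2-node machine
for node in "${nodes[@]}"; do
    # per-node meminfo lines look like "Node 0 HugePages_Total:    512"
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    echo "${node##*/}: HugePages_Total=$total (expected $per_node)"
done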
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58443588 kB' 'MemUsed: 7215420 kB' 'SwapCached: 0 kB' 'Active: 3221060 kB' 'Inactive: 285212 kB' 'Active(anon): 2683748 kB' 'Inactive(anon): 0 kB' 'Active(file): 537312 kB' 'Inactive(file): 285212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3205036 kB' 'Mapped: 104104 kB' 'AnonPages: 304348 kB' 'Shmem: 2382512 kB' 'KernelStack: 15736 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183732 kB' 'Slab: 695280 kB' 'SReclaimable: 183732 kB' 'SUnreclaim: 511548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.403 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.668 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 41062212 kB' 'MemUsed: 19617592 kB' 'SwapCached: 0 kB' 'Active: 12909628 kB' 'Inactive: 4194676 kB' 'Active(anon): 12774356 kB' 'Inactive(anon): 0 kB' 'Active(file): 135272 kB' 'Inactive(file): 4194676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 16829408 kB' 'Mapped: 110500 kB' 'AnonPages: 275128 kB' 'Shmem: 12499460 kB' 'KernelStack: 11448 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197280 kB' 'Slab: 583132 kB' 'SReclaimable: 197280 kB' 'SUnreclaim: 385852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.669 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.669 
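The trace above is setup/common.sh's get_meminfo walking the node1 meminfo file key by key until it reaches HugePages_Surp and echoes its value. A minimal standalone sketch of that lookup pattern follows; the function and variable names are illustrative, not the verbatim SPDK helper, but the per-node file selection, the "Node <N> " prefix strip, and the IFS=': ' read loop mirror what the trace shows.

#!/usr/bin/env bash
# Sketch only: look up one key in /proc/meminfo or a per-node meminfo file.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Prefer the per-node view when a node is given and its sysfs file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip keys until the requested one
        echo "$val"
        return 0
    done
    return 1
}
# Example, matching the query traced above:
get_meminfo_sketch HugePages_Surp 1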
09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 
09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.670 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:28.671 node0=512 expecting 512 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:28.671 node1=512 expecting 512 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == 
\5\1\2 ]] 00:03:28.671 00:03:28.671 real 0m3.786s 00:03:28.671 user 0m1.529s 00:03:28.671 sys 0m2.309s 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:28.671 09:57:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:28.671 ************************************ 00:03:28.671 END TEST even_2G_alloc 00:03:28.671 ************************************ 00:03:28.671 09:57:14 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:28.671 09:57:14 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:28.671 09:57:14 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:28.671 09:57:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.671 ************************************ 00:03:28.671 START TEST odd_alloc 00:03:28.671 ************************************ 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:28.671 09:57:14 
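At this point even_2G_alloc has finished (both nodes report 512 pages, matching the expectation) and odd_alloc starts with HUGEMEM=2049, i.e. 2098176 kB of 2048 kB hugepages. The arithmetic behind the 512/513 per-node split is sketched below; it is illustrative only, and which node absorbs the extra page is decided by setup/hugepages.sh's own loop over the nodes.

#!/usr/bin/env bash
# Sketch only: how 2098176 kB of 2048 kB hugepages becomes a 512 + 513 split.
size_kb=2098176      # HUGEMEM=2049 MB expressed in kB
hugepage_kb=2048     # Hugepagesize from /proc/meminfo
no_nodes=2           # NUMA nodes on the test machine
nr_hugepages=$(( size_kb / hugepage_kb ))   # 1025
base=$(( nr_hugepages / no_nodes ))         # 512
extra=$(( nr_hugepages % no_nodes ))        # 1
echo "nr_hugepages=$nr_hugepages: $base per node, one node gets $(( base + extra ))"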
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.671 09:57:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.982 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:31.982 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:31.982 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 
09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99537068 kB' 'MemAvailable: 104055716 kB' 'Buffers: 2696 kB' 'Cached: 20031844 kB' 'SwapCached: 0 kB' 'Active: 16132124 kB' 'Inactive: 4479888 kB' 'Active(anon): 15459540 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580648 kB' 'Mapped: 214656 kB' 'Shmem: 14882068 kB' 'KReclaimable: 381012 kB' 'Slab: 1278376 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897364 kB' 'KernelStack: 27040 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 16898960 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235884 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.249 
09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.249 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.250 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.251 
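The hugepages.sh@96 check above only reads AnonHugePages when transparent hugepages are not pinned to "never"; the traced value is "always [madvise] never", so the read happens and returns 0, giving anon=0. A rough equivalent of that guard, assuming the usual /sys/kernel/mm/transparent_hugepage/enabled path, is sketched here for illustration only.

#!/usr/bin/env bash
# Sketch only: count anonymous hugepages only if THP is not set to "never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo '')
anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP may be handing out anonymous hugepages; read the current total.
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "AnonHugePages: ${anon:-0} kB"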
09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99538560 kB' 'MemAvailable: 104057208 kB' 'Buffers: 2696 kB' 'Cached: 20031844 kB' 'SwapCached: 0 kB' 'Active: 16132300 kB' 'Inactive: 4479888 kB' 'Active(anon): 15459716 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580900 kB' 'Mapped: 214628 kB' 'Shmem: 14882068 kB' 'KReclaimable: 381012 kB' 'Slab: 1278368 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897356 kB' 'KernelStack: 27024 kB' 'PageTables: 8908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 16898976 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.251 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 
09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.252 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99539808 kB' 'MemAvailable: 104058456 kB' 'Buffers: 2696 kB' 'Cached: 20031864 kB' 'SwapCached: 0 kB' 'Active: 16132304 kB' 'Inactive: 4479888 kB' 'Active(anon): 15459720 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580868 kB' 'Mapped: 214628 kB' 'Shmem: 14882088 kB' 'KReclaimable: 381012 kB' 'Slab: 1278364 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897352 kB' 
'KernelStack: 27024 kB' 'PageTables: 8932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 16898996 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.253 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:17 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.254 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:32.255 nr_hugepages=1025 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.255 resv_hugepages=0 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # 
echo surplus_hugepages=0 00:03:32.255 surplus_hugepages=0 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.255 anon_hugepages=0 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99540476 kB' 'MemAvailable: 104059124 kB' 'Buffers: 2696 kB' 'Cached: 20031884 kB' 'SwapCached: 0 kB' 'Active: 16132312 kB' 'Inactive: 4479888 kB' 'Active(anon): 15459728 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580864 kB' 'Mapped: 214628 kB' 'Shmem: 14882108 kB' 'KReclaimable: 381012 kB' 'Slab: 1278364 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897352 kB' 'KernelStack: 27024 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508408 kB' 'Committed_AS: 16899020 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.255 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 
09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.256 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.257 09:57:18 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58477808 kB' 'MemUsed: 7181200 kB' 'SwapCached: 0 kB' 'Active: 3222076 kB' 'Inactive: 285212 kB' 'Active(anon): 2684764 kB' 'Inactive(anon): 0 kB' 'Active(file): 537312 kB' 'Inactive(file): 285212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3205124 kB' 'Mapped: 104104 kB' 'AnonPages: 305324 kB' 'Shmem: 2382600 kB' 'KernelStack: 15544 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183732 kB' 'Slab: 695248 kB' 'SReclaimable: 183732 kB' 'SUnreclaim: 511516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.257 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.521 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.521 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.522 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 41062668 kB' 'MemUsed: 19617136 kB' 'SwapCached: 0 kB' 'Active: 12910236 kB' 'Inactive: 4194676 kB' 'Active(anon): 12774964 kB' 'Inactive(anon): 0 kB' 'Active(file): 135272 kB' 'Inactive(file): 4194676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 16829456 kB' 'Mapped: 110524 kB' 'AnonPages: 275540 kB' 'Shmem: 12499508 kB' 'KernelStack: 11480 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197280 kB' 'Slab: 583116 kB' 'SReclaimable: 197280 kB' 'SUnreclaim: 385836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.523 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
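The long runs of '[[ <key> == HugePages_Surp ]]' / 'continue' pairs through this stretch of the trace are setup/common.sh's get_meminfo walking every field of /sys/devices/system/node/node1/meminfo until it reaches the requested key, exactly as it did for node 0 above. Below is a condensed bash sketch of that lookup, reconstructed from the traced commands (the mapfile, the 'Node <n>' prefix strip, and the IFS=': ' read loop) rather than copied from the SPDK source, so the function name and argument handling are approximations:

  #!/usr/bin/env bash
  # Sketch of a per-node meminfo lookup in the style of setup/common.sh's get_meminfo.
  # Reconstructed from the xtrace; not the verbatim SPDK helper.
  shopt -s extglob   # needed for the +([0-9]) pattern used to strip the "Node <n> " prefix

  get_meminfo_sketch() {
      local get=$1       # key to look up, e.g. HugePages_Surp
      local node=${2:-}  # optional NUMA node number
      local mem_f=/proc/meminfo
      # A per-node lookup reads the node-local meminfo instead of the global file.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Node files prefix every line with "Node <n> "; strip it so keys match /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      # Scan key by key until the requested one is found, then print its value.
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  # Example: the lookup made for node 1 in this part of the trace.
  get_meminfo_sketch HugePages_Surp 1
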
00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:32.524 node0=512 expecting 513 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:32.524 node1=513 expecting 512 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:32.524 00:03:32.524 real 0m3.781s 00:03:32.524 user 0m1.537s 00:03:32.524 sys 0m2.297s 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:32.524 09:57:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.524 ************************************ 00:03:32.524 END TEST odd_alloc 00:03:32.524 ************************************ 00:03:32.524 09:57:18 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:32.524 09:57:18 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:32.524 09:57:18 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:32.524 09:57:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.524 ************************************ 00:03:32.524 START TEST custom_alloc 00:03:32.524 ************************************ 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:32.524 09:57:18 
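For reference, the odd_alloc pass that finishes above asked for an odd total of 1025 hugepages and then checked that the counts the kernel actually placed on the two nodes (512 on node0 and 513 on node1, per the node meminfo reads) match the predicted split as an unordered pair. That is why the 'node0=512 expecting 513' / 'node1=513 expecting 512' lines still end in a pass: the sorted-key comparison at hugepages.sh@130 only cares that {512, 513} equals {512, 513}, not which node received the extra page. A small sketch of that order-insensitive check; split_pages and the variable names are illustrative, not taken from hugepages.sh:

  #!/usr/bin/env bash
  # Sketch: splitting an odd hugepage count over NUMA nodes and comparing the result
  # order-insensitively, mirroring the sorted_t/sorted_s check seen in the trace.

  split_pages() {
      local total=$1 nodes=$2 i base extra
      base=$(( total / nodes ))
      extra=$(( total % nodes ))
      for (( i = 0; i < nodes; i++ )); do
          # The first "extra" nodes get one page more; the kernel may pick different nodes.
          echo $(( base + (i < extra ? 1 : 0) ))
      done
  }

  expected=( $(split_pages 1025 2) )   # -> 513 512
  actual=( 512 513 )                   # per-node HugePages_Total values read back above

  # Compare as sorted multisets, so it does not matter which node got the odd page.
  if [[ "$(printf '%s\n' "${expected[@]}" | sort -n)" == "$(printf '%s\n' "${actual[@]}" | sort -n)" ]]; then
      echo "odd_alloc split matches"
  fi
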
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.524 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 
-- # local _no_nodes=2 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.525 09:57:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.835 
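The custom_alloc xtrace above turns the two requested sizes into per-node hugepage counts before re-running scripts/setup.sh: 1048576 becomes nodes_hp[0]=512 and 2097152 becomes nodes_hp[1]=1024, and the pair is joined (with the local IFS=, set at the top of the function) into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', 1536 pages in total. A short sketch of that bookkeeping follows; build_hugenode is a hypothetical helper, and treating both the requested sizes and the hugepage size as kB values (2048 kB pages, per the later meminfo dumps) is an assumption, not something spelled out in the trace:

  #!/usr/bin/env bash
  # Sketch: building a HUGENODE string from per-node size requests. build_hugenode is a
  # hypothetical helper; sizes are assumed to be in kB, with 2048 kB hugepages.

  build_hugenode() {
      local hugepgsz_kb=2048                  # Hugepagesize reported in the meminfo dumps
      local -a nodes_hp=() hugenode=()
      local size_kb i total=0
      for size_kb in "$@"; do                 # one requested size (kB) per NUMA node
          nodes_hp+=( $(( size_kb / hugepgsz_kb )) )
      done
      for i in "${!nodes_hp[@]}"; do
          hugenode+=( "nodes_hp[$i]=${nodes_hp[i]}" )
          (( total += nodes_hp[i] ))
      done
      local IFS=,                             # join the per-node entries with commas
      printf "HUGENODE='%s'  (%s pages total)\n" "${hugenode[*]}" "$total"
  }

  build_hugenode 1048576 2097152
  # prints: HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'  (1536 pages total)
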
0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:35.835 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:35.835 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 98499164 kB' 'MemAvailable: 103017812 kB' 'Buffers: 2696 
kB' 'Cached: 20032008 kB' 'SwapCached: 0 kB' 'Active: 16131892 kB' 'Inactive: 4479888 kB' 'Active(anon): 15459308 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 580484 kB' 'Mapped: 214728 kB' 'Shmem: 14882232 kB' 'KReclaimable: 381012 kB' 'Slab: 1278364 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897352 kB' 'KernelStack: 27008 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 16899536 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.099 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.100 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 
0 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 98500116 kB' 'MemAvailable: 103018764 kB' 'Buffers: 2696 kB' 'Cached: 20032012 kB' 'SwapCached: 0 kB' 'Active: 16131408 kB' 'Inactive: 4479888 kB' 'Active(anon): 15458824 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579860 kB' 'Mapped: 214652 kB' 'Shmem: 14882236 kB' 'KReclaimable: 381012 kB' 'Slab: 1278400 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897388 kB' 'KernelStack: 26992 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 16899556 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:36.101 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.102 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.368 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.368 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.368 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.368 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.368 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.368 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 
09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:36.369 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.370 
09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 98500776 kB' 'MemAvailable: 103019424 kB' 'Buffers: 2696 kB' 'Cached: 20032032 kB' 'SwapCached: 0 kB' 'Active: 16131432 kB' 'Inactive: 4479888 kB' 'Active(anon): 15458848 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579856 kB' 'Mapped: 214652 kB' 'Shmem: 14882256 kB' 'KReclaimable: 381012 kB' 'Slab: 1278400 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897388 kB' 'KernelStack: 26992 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 16899712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 
09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.370 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:36.371 nr_hugepages=1536 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.371 resv_hugepages=0 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.371 surplus_hugepages=0 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.371 anon_hugepages=0 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:36.371 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 98501560 kB' 'MemAvailable: 103020208 kB' 'Buffers: 2696 kB' 'Cached: 
20032072 kB' 'SwapCached: 0 kB' 'Active: 16131116 kB' 'Inactive: 4479888 kB' 'Active(anon): 15458532 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579468 kB' 'Mapped: 214652 kB' 'Shmem: 14882296 kB' 'KReclaimable: 381012 kB' 'Slab: 1278400 kB' 'SReclaimable: 381012 kB' 'SUnreclaim: 897388 kB' 'KernelStack: 26976 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985144 kB' 'Committed_AS: 16899736 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.372 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
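The xtrace around this point is setup/common.sh's meminfo helper walking /proc/meminfo one "key: value" row at a time and discarding every field that is not the one requested (here HugePages_Total, whose value of 1536 is echoed a little further down). For readers following the trace, a minimal stand-alone sketch of that scan is below; the function name is assumed for illustration and is not the script's own.

  get_meminfo_sketch() {
      local get=$1 var val _
      # Walk the file row by row; IFS=': ' splits "HugePages_Total:    1536"
      # into var=HugePages_Total and val=1536.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching field: keep reading
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }
  get_meminfo_sketch HugePages_Total   # prints 1536 on this build host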
00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 
09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.373 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # 
(( 1536 == nr_hugepages + surp + resv )) 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58495108 kB' 'MemUsed: 7163900 kB' 'SwapCached: 0 kB' 'Active: 3220728 kB' 'Inactive: 285212 kB' 'Active(anon): 2683416 kB' 'Inactive(anon): 0 kB' 'Active(file): 537312 kB' 'Inactive(file): 285212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3205280 kB' 'Mapped: 104104 kB' 'AnonPages: 303804 kB' 'Shmem: 2382756 kB' 'KernelStack: 15528 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183732 kB' 'Slab: 695432 kB' 'SReclaimable: 183732 kB' 'SUnreclaim: 511700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 
09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.374 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679804 kB' 'MemFree: 40006376 kB' 'MemUsed: 20673428 kB' 'SwapCached: 0 kB' 'Active: 12911144 kB' 'Inactive: 4194676 kB' 'Active(anon): 12775872 kB' 'Inactive(anon): 0 kB' 'Active(file): 135272 kB' 'Inactive(file): 4194676 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 16829508 kB' 'Mapped: 110548 kB' 'AnonPages: 276472 kB' 'Shmem: 12499560 kB' 'KernelStack: 11496 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197280 kB' 'Slab: 582968 kB' 'SReclaimable: 197280 kB' 'SUnreclaim: 385688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.375 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
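The same scan is repeated here against the per-node files, /sys/devices/system/node/node0/meminfo and node1/meminfo, to read HugePages_Surp for each node. Those files prefix every row with "Node <id>", which the script strips with the extglob expansion visible in the trace before re-running the key scan. A small illustration of that prefix handling follows; the node and variable names are chosen here for the example.

  shopt -s extglob
  node=0
  mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
  mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Total:   512" -> "HugePages_Total:   512"
  printf '%s\n' "${mem[@]}" | grep HugePages_Total   # 512 on node0, 1024 on node1 in this run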
00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
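The bookkeeping these passes feed into is simple: the machine-wide HugePages_Total (1536 above) has to equal the sum of what the two NUMA nodes hold plus any surplus and reserved pages, and the per-node counts are later compared against what the test asked for. A simplified restatement of that arithmetic, using the numbers printed in this trace (the script's own accounting uses its nodes_test/nodes_sys arrays rather than this exact loop):

  nodes_sys=([0]=512 [1]=1024)   # HugePages_Total reported by node0 and node1 above
  surp=0 resv=0                  # HugePages_Surp / HugePages_Rsvd, both 0 in this run
  total=0
  for node in "${!nodes_sys[@]}"; do
      (( total += nodes_sys[node] + surp + resv ))
  done
  echo "$total"                  # 1536, matching the machine-wide HugePages_Total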
00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.376 09:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.376 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.377 node0=512 expecting 512 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:36.377 node1=1024 expecting 1024 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:36.377 00:03:36.377 real 0m3.861s 00:03:36.377 user 0m1.531s 00:03:36.377 sys 0m2.385s 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:36.377 09:57:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.377 ************************************ 00:03:36.377 END TEST custom_alloc 00:03:36.377 ************************************ 00:03:36.377 09:57:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:36.377 09:57:22 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:36.377 09:57:22 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:36.377 09:57:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.377 ************************************ 00:03:36.377 START TEST no_shrink_alloc 00:03:36.377 ************************************ 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.377 09:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.377 09:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:39.692 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:39.692 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:39.692 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 
-- # local anon 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.273 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99551944 kB' 'MemAvailable: 104070544 kB' 'Buffers: 2696 kB' 'Cached: 20032204 kB' 'SwapCached: 0 kB' 'Active: 16140064 kB' 'Inactive: 4479888 kB' 'Active(anon): 15467480 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 587604 kB' 'Mapped: 215784 kB' 'Shmem: 14882428 kB' 'KReclaimable: 380916 kB' 'Slab: 1278304 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 897388 kB' 'KernelStack: 27024 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16908148 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235744 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
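[editor note] The trace above is setup/common.sh's get_meminfo walking the freshly printed /proc/meminfo snapshot one key at a time: hugepages.sh@96 first confirms the transparent_hugepage "enabled" value ("always [madvise] never" here) is not "[never]" before sampling AnonHugePages, then common.sh reads each field pair with IFS=': ', hits "continue" for every non-matching key, and echoes the value once the requested key matches. A minimal stand-alone sketch of that lookup pattern (function name and the direct file read are illustrative simplifications, not the exact setup/common.sh code; the real helper can also read a per-node meminfo file and strips its "Node <N> " key prefix, but with node unset it falls back to /proc/meminfo, which is the case exercised here):

    #!/usr/bin/env bash
    # Sketch only: mirrors the lookup pattern traced above.
    get_meminfo_sketch() {
        local get=$1                      # e.g. AnonHugePages, HugePages_Surp
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of "continue" steps above
            echo "${val:-0}"                   # matched: print the value (kB or a count)
            return 0
        done < /proc/meminfo
        echo 0                            # key absent: report 0
    }

Used as, for example, anon=$(get_meminfo_sketch AnonHugePages), which yields 0 on this machine, matching the "echo 0 / return 0" seen further down in the trace.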
00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.274 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99556492 kB' 'MemAvailable: 104075092 kB' 'Buffers: 2696 kB' 'Cached: 20032204 kB' 'SwapCached: 0 kB' 'Active: 16134076 kB' 'Inactive: 4479888 kB' 'Active(anon): 15461492 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581964 kB' 'Mapped: 215048 kB' 'Shmem: 14882428 kB' 'KReclaimable: 380916 kB' 'Slab: 1278236 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 897320 kB' 'KernelStack: 27008 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16902048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235676 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
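[editor note] The AnonHugePages walk came back as anon=0, and the same scan is now repeated for HugePages_Surp; the printf above is the fresh /proc/meminfo snapshot taken for that call (common.sh@22-28 again chose /proc/meminfo because the per-node path does not exist with an empty node argument). In terms of the sketch above, the bookkeeping hugepages.sh is doing at @96-@100 amounts to the following (hedged sketch; the sysfs path is the usual location of the THP "enabled" knob and is assumed, not shown verbatim in the trace):

    # Sample the anon-THP baseline only when THP is not fully disabled,
    # then the surplus and reserved hugepage counters.
    anon=0
    [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]] && \
        anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)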
00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.275 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.276 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99560720 kB' 'MemAvailable: 104079320 kB' 'Buffers: 2696 kB' 'Cached: 20032224 kB' 'SwapCached: 0 kB' 'Active: 16134024 kB' 'Inactive: 4479888 kB' 'Active(anon): 15461440 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582508 kB' 'Mapped: 214692 kB' 'Shmem: 14882448 kB' 'KReclaimable: 380916 kB' 'Slab: 1278224 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 897308 kB' 'KernelStack: 27136 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16902068 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235804 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.277 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.278 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
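[editor note] The HugePages_Rsvd walk below finishes the same way (value 0), after which hugepages.sh@102-@110, traced just below, echoes the four counters and re-reads HugePages_Total to confirm the no-shrink expectation. A hedged sketch of that final check, with requested=1024 standing in for this run's configured hugepage count (the real script already holds nr_hugepages before re-reading HugePages_Total; the ordering here is simplified):

    requested=1024
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv" "surplus_hugepages=$surp" "anon_hugepages=$anon"
    (( requested == nr_hugepages + surp + resv ))   # the pool must still add up
    (( requested == nr_hugepages ))                 # and nothing was shrunk away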
00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.279 nr_hugepages=1024 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.279 resv_hugepages=0 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.279 surplus_hugepages=0 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.279 anon_hugepages=0 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99564864 kB' 'MemAvailable: 104083464 kB' 'Buffers: 2696 kB' 'Cached: 20032244 kB' 'SwapCached: 0 kB' 'Active: 16133252 kB' 'Inactive: 
4479888 kB' 'Active(anon): 15460668 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581616 kB' 'Mapped: 214692 kB' 'Shmem: 14882468 kB' 'KReclaimable: 380916 kB' 'Slab: 1278224 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 897308 kB' 'KernelStack: 26992 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16903636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235804 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 
09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.279 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 
09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.280 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57456832 kB' 'MemUsed: 8202176 kB' 'SwapCached: 0 kB' 'Active: 3220188 kB' 'Inactive: 285212 kB' 'Active(anon): 2682876 kB' 'Inactive(anon): 0 kB' 'Active(file): 537312 kB' 'Inactive(file): 285212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3205344 kB' 'Mapped: 104104 kB' 'AnonPages: 303220 kB' 'Shmem: 2382820 kB' 'KernelStack: 15624 kB' 'PageTables: 4388 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183636 kB' 'Slab: 695364 kB' 'SReclaimable: 183636 kB' 'SUnreclaim: 511728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 
09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.281 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
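The node=0 call above points the same scan at /sys/devices/system/node/node0/meminfo, and the earlier get_nodes trace fills an indexed nodes_sys array (node0=1024, node1=0, no_nodes=2). A hedged sketch of that per-node bookkeeping; the nr_hugepages sysfs path is an assumption inferred from the 2048 kB Hugepagesize reported in the meminfo dump:
# Sketch of the get_nodes/nodes_sys bookkeeping: one entry per NUMA node
# with its current 2 MiB hugepage count.
nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"       # 2 on this dual-socket system
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"     # e.g. node0=1024, node1=0
done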
00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:40.282 node0=1024 expecting 1024 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.282 09:57:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.598 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:80:01.1 (8086 0b00): Already 
using the vfio-pci driver 00:03:43.598 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:43.598 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:43.598 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99545816 kB' 'MemAvailable: 104064416 kB' 'Buffers: 2696 kB' 'Cached: 20032348 kB' 'SwapCached: 0 kB' 'Active: 16134096 kB' 'Inactive: 4479888 kB' 'Active(anon): 15461512 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582176 kB' 'Mapped: 214732 kB' 'Shmem: 14882572 kB' 'KReclaimable: 380916 kB' 'Slab: 1277200 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 896284 kB' 
'KernelStack: 26944 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16902364 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.598 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
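The scripts/setup.sh run above (NRHUGE=512, CLEAR_HUGE=no) prints "INFO: Requested 512 hugepages but 1024 already allocated on node0" and leaves the allocation alone, which is the no_shrink_alloc behaviour the test then re-verifies. An illustrative sketch of that decision, assuming direct nr_hugepages writes; it is not the verbatim scripts/setup.sh logic:
# Sketch: with CLEAR_HUGE=no, keep an allocation that already meets or
# exceeds the request instead of shrinking it.
NRHUGE=512
CLEAR_HUGE=no
nr_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
current=$(< "$nr_path")
if [[ $CLEAR_HUGE == no ]] && (( current >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
else
    echo "$NRHUGE" > "$nr_path"   # would shrink/grow the pool to the request
fi
The verify_nr_hugepages pass that follows in the trace then confirms node0 still reports 1024 pages.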
00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.599 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99548364 kB' 'MemAvailable: 104066964 kB' 'Buffers: 2696 kB' 'Cached: 20032352 kB' 'SwapCached: 0 kB' 'Active: 16134624 kB' 'Inactive: 4479888 kB' 'Active(anon): 15462040 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582772 kB' 'Mapped: 214732 kB' 'Shmem: 14882576 kB' 'KReclaimable: 380916 kB' 'Slab: 1277232 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 896316 kB' 'KernelStack: 27056 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16902884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235772 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 
09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.600 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.601 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99550692 kB' 'MemAvailable: 104069292 kB' 'Buffers: 2696 kB' 'Cached: 20032368 kB' 'SwapCached: 0 kB' 'Active: 16134128 kB' 'Inactive: 4479888 kB' 'Active(anon): 15461544 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582256 kB' 'Mapped: 214708 kB' 'Shmem: 14882592 kB' 'KReclaimable: 380916 kB' 'Slab: 1277288 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 896372 kB' 'KernelStack: 26912 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16902540 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235692 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 
09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.602 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:43.603 nr_hugepages=1024 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.603 resv_hugepages=0 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.603 surplus_hugepages=0 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.603 anon_hugepages=0 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 
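The trace above is the key-by-key scan that the setup/common.sh get_meminfo helper runs over /proc/meminfo: each entry is split with IFS=': ' into a key and a value, every non-matching key takes the "continue" branch, and the first matching key's value is echoed back (anon=0, surp=0 and resv=0 here), after which hugepages.sh verifies that HugePages_Total matches the requested count plus surplus and reserved pages. A minimal sketch of that pattern, reconstructed from the trace rather than taken from the SPDK sources (function and variable names are illustrative assumptions), would look like:

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {    # usage: get_meminfo <key> [numa-node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node lookups read the node's meminfo, whose lines carry a "Node N " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan key by key; every non-matching key is one "continue" in the trace above.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"   # numeric value only; the "kB" unit lands in $_
        return 0
    done
    return 1
}

# Accounting check corresponding to hugepages.sh@107 and @110 in the trace:
nr_hugepages=1024
anon=$(get_meminfo AnonHugePages)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) &&
    echo "nr_hugepages=$nr_hugepages surplus=$surp reserved=$resv anon=$anon"

Against the meminfo dump printed above, get_meminfo HugePages_Total would return 1024, which is exactly what the (( 1024 == nr_hugepages + surp + resv )) check in the trace relies on before it proceeds to read HugePages_Total below.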
00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.603 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338812 kB' 'MemFree: 99552120 kB' 'MemAvailable: 104070720 kB' 'Buffers: 2696 kB' 'Cached: 20032408 kB' 'SwapCached: 0 kB' 'Active: 16134704 kB' 'Inactive: 4479888 kB' 'Active(anon): 15462120 kB' 'Inactive(anon): 0 kB' 'Active(file): 672584 kB' 'Inactive(file): 4479888 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582864 kB' 'Mapped: 214708 kB' 'Shmem: 14882632 kB' 'KReclaimable: 380916 kB' 'Slab: 1277288 kB' 'SReclaimable: 380916 kB' 'SUnreclaim: 896372 kB' 'KernelStack: 27200 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509432 kB' 'Committed_AS: 16902932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 159552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 5713268 kB' 'DirectMap2M: 38006784 kB' 'DirectMap1G: 92274688 kB' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.604 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@19 -- # local var val 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.605 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57464304 kB' 'MemUsed: 8194704 kB' 'SwapCached: 0 kB' 'Active: 3222680 kB' 'Inactive: 285212 kB' 'Active(anon): 2685368 kB' 'Inactive(anon): 0 kB' 'Active(file): 537312 kB' 'Inactive(file): 285212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3205368 kB' 'Mapped: 104104 kB' 'AnonPages: 305764 kB' 'Shmem: 2382844 kB' 'KernelStack: 15720 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 183636 kB' 'Slab: 694688 kB' 'SReclaimable: 183636 kB' 'SUnreclaim: 511052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 
09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 
09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.606 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:43.607 node0=1024 expecting 1024 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:43.607 00:03:43.607 real 0m7.208s 00:03:43.607 user 0m2.701s 00:03:43.607 sys 0m4.513s 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:43.607 09:57:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.607 ************************************ 00:03:43.607 END TEST no_shrink_alloc 00:03:43.607 ************************************ 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:43.607 09:57:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.607 00:03:43.607 real 0m26.941s 00:03:43.607 user 0m10.413s 00:03:43.607 sys 0m16.752s 00:03:43.607 09:57:29 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:43.607 09:57:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.607 ************************************ 00:03:43.607 END TEST hugepages 00:03:43.607 ************************************ 00:03:43.870 09:57:29 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:43.870 09:57:29 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:43.870 09:57:29 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:43.870 09:57:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:43.870 ************************************ 00:03:43.870 START TEST driver 00:03:43.870 ************************************ 00:03:43.870 09:57:29 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:43.870 * Looking for test storage... 
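The node-level lookup above (reading /sys/devices/system/node/node0/meminfo) confirms that node0 holds all 1024 huge pages with no surplus, hence "node0=1024 expecting 1024", and both no_shrink_alloc and the hugepages suite finish cleanly. Before the driver tests start, clear_hp resets every per-node huge page count through sysfs and exports CLEAR_HUGE=yes; a rough sketch of that clean-up, assuming the standard sysfs layout (the function name is illustrative):

  # Illustrative sketch of the hugepage clean-up traced above.
  clear_hp_sketch() {
      local node hp
      for node in /sys/devices/system/node/node[0-9]*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"   # drop this node's pool for each page size
          done
      done
      export CLEAR_HUGE=yes
  }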
00:03:43.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:43.870 09:57:29 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:43.870 09:57:29 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:43.870 09:57:29 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.173 09:57:34 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:49.173 09:57:34 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:49.173 09:57:34 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:49.173 09:57:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.173 ************************************ 00:03:49.173 START TEST guess_driver 00:03:49.173 ************************************ 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:49.173 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:49.174 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:49.174 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:49.174 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:49.174 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:49.174 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:49.174 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:49.174 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:49.174 09:57:34 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:49.174 Looking for driver=vfio-pci 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.174 09:57:34 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.490 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.753 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:52.753 09:57:38 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:52.753 09:57:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.753 09:57:38 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.056 00:03:58.056 real 0m8.736s 00:03:58.056 user 0m2.878s 00:03:58.056 sys 0m5.051s 00:03:58.056 09:57:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:58.056 09:57:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.056 ************************************ 00:03:58.056 END TEST guess_driver 00:03:58.056 ************************************ 00:03:58.056 00:03:58.056 real 0m13.844s 00:03:58.056 user 0m4.451s 00:03:58.056 sys 0m7.814s 00:03:58.056 09:57:43 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:58.056 
09:57:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.056 ************************************ 00:03:58.056 END TEST driver 00:03:58.056 ************************************ 00:03:58.056 09:57:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:58.056 09:57:43 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:58.056 09:57:43 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:58.056 09:57:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.056 ************************************ 00:03:58.056 START TEST devices 00:03:58.056 ************************************ 00:03:58.056 09:57:43 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:58.056 * Looking for test storage... 00:03:58.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:58.056 09:57:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:58.056 09:57:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:58.056 09:57:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.056 09:57:43 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:02.276 09:57:47 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:02.276 No valid GPT data, 
bailing 00:04:02.276 09:57:47 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:02.276 09:57:47 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:02.276 09:57:47 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:02.276 09:57:47 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:02.276 09:57:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:02.276 ************************************ 00:04:02.276 START TEST nvme_mount 00:04:02.276 ************************************ 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:02.276 09:57:47 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:02.276 09:57:47 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:02.852 Creating new GPT entries in memory. 00:04:02.852 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:02.852 other utilities. 00:04:02.852 09:57:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:02.852 09:57:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.852 09:57:48 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.852 09:57:48 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.852 09:57:48 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:04.240 Creating new GPT entries in memory. 00:04:04.240 The operation has completed successfully. 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2567199 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.240 09:57:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:07.547 09:57:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.547 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.547 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:07.547 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.547 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.547 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.547 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:07.548 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.548 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.548 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.548 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.548 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.548 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.548 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.809 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.809 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.809 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.809 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.809 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:07.809 09:57:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:07.809 09:57:53 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.809 09:57:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:07.809 09:57:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:07.809 09:57:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.071 09:57:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.379 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:11.380 09:57:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.642 09:57:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:14.957 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.220 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.220 00:04:15.220 real 0m13.277s 00:04:15.220 user 0m4.078s 00:04:15.220 sys 0m7.037s 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:15.220 09:58:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:15.220 ************************************ 00:04:15.220 END TEST nvme_mount 00:04:15.220 ************************************ 00:04:15.220 
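[editor's note] For reference, the nvme_mount run that just finished exercises roughly the command sequence below against the test disk. This is a minimal sketch reconstructed from the trace, not the test script itself: the device and mount point are placeholders (the real mount point lives under spdk/test/setup/nvme_mount), and the test additionally repeats the format/mount pass against the whole, unpartitioned disk with a 1024M size cap, as seen above.

    # Sketch of the nvme_mount flow seen in the trace (placeholder paths, assumed helper-free form).
    disk=/dev/nvme0n1            # test disk selected by devices.sh in this run
    mnt=/tmp/nvme_mount          # stand-in for .../spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                            # wipe old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # create a 1 GiB partition (2097152 sectors)
    mkfs.ext4 -qF "${disk}p1"                           # format the new partition
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"         # mount it
    touch "$mnt/test_nvme"                              # drop a marker file (the script uses its own helper)

    # Teardown mirrors cleanup_nvme: unmount, then wipe partition and whole disk.
    umount "$mnt"
    wipefs --all "${disk}p1"
    wipefs --all "$disk"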
09:58:00 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:15.220 09:58:00 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:15.220 09:58:00 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:15.220 09:58:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:15.220 ************************************ 00:04:15.220 START TEST dm_mount 00:04:15.220 ************************************ 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.220 09:58:00 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:16.212 Creating new GPT entries in memory. 00:04:16.212 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:16.212 other utilities. 00:04:16.212 09:58:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.212 09:58:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.212 09:58:01 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.212 09:58:01 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.212 09:58:01 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:17.602 Creating new GPT entries in memory. 00:04:17.602 The operation has completed successfully. 
00:04:17.602 09:58:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.602 09:58:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.602 09:58:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.602 09:58:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.602 09:58:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:18.547 The operation has completed successfully. 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2572254 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.547 09:58:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:21.860 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.122 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.122 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:22.122 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:22.123 
09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.123 09:58:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:25.431 09:58:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.431 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.431 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.431 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:25.431 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:25.432 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:25.432 00:04:25.432 real 0m10.172s 00:04:25.432 user 0m2.506s 00:04:25.432 sys 0m4.685s 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:25.432 09:58:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:25.432 ************************************ 00:04:25.432 END TEST dm_mount 00:04:25.432 ************************************ 00:04:25.432 09:58:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:25.432 09:58:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:25.432 09:58:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.432 09:58:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
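[editor's note] The dm_mount test logged above follows the same pattern but layers a device-mapper target over two partitions before formatting. A rough sketch is below; the dmsetup table is an assumption for illustration only, since the trace shows `dmsetup create nvme_dm_test` but not the table it is fed (the holders check on nvme0n1p1 and nvme0n1p2 only confirms both partitions back dm-0).

    # Sketch of the dm_mount flow (illustrative; the linear table is assumed, not from the trace).
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # nvme0n1p1, 2097152 sectors
    flock "$disk" sgdisk "$disk" --new=2:2099200:4196351  # nvme0n1p2, 2097152 sectors

    # Concatenate the two partitions into one linear dm device (table is a guess).
    dmsetup create nvme_dm_test <<'EOF'
    0 2097152 linear /dev/nvme0n1p1 0
    2097152 2097152 linear /dev/nvme0n1p2 0
    EOF

    readlink -f /dev/mapper/nvme_dm_test                  # resolves to /dev/dm-0 in this run
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount

    # Teardown mirrors cleanup_dm: unmount, remove the mapping, wipe the partitions.
    umount /tmp/dm_mount
    dmsetup remove --force nvme_dm_test
    wipefs --all /dev/nvme0n1p1
    wipefs --all /dev/nvme0n1p2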
00:04:25.432 09:58:11 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:25.432 09:58:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.432 09:58:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.694 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:25.694 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:25.694 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:25.694 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:25.694 09:58:11 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:25.694 09:58:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:25.694 09:58:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:25.694 09:58:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.694 09:58:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:25.694 09:58:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.694 09:58:11 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:25.694 00:04:25.694 real 0m28.076s 00:04:25.694 user 0m8.209s 00:04:25.694 sys 0m14.603s 00:04:25.694 09:58:11 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:25.694 09:58:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.694 ************************************ 00:04:25.694 END TEST devices 00:04:25.694 ************************************ 00:04:25.956 00:04:25.956 real 1m34.557s 00:04:25.956 user 0m31.400s 00:04:25.956 sys 0m54.120s 00:04:25.956 09:58:11 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:25.956 09:58:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.956 ************************************ 00:04:25.956 END TEST setup.sh 00:04:25.956 ************************************ 00:04:25.956 09:58:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:29.267 Hugepages 00:04:29.267 node hugesize free / total 00:04:29.267 node0 1048576kB 0 / 0 00:04:29.267 node0 2048kB 2048 / 2048 00:04:29.267 node1 1048576kB 0 / 0 00:04:29.267 node1 2048kB 0 / 0 00:04:29.267 00:04:29.267 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.267 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:29.267 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:29.267 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:29.267 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:29.267 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:29.267 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:29.267 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:29.267 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:29.267 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:29.267 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:29.267 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:29.267 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:29.529 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:29.529 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:29.529 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:29.529 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:29.529 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:29.529 09:58:15 -- spdk/autotest.sh@130 -- # uname -s 
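[editor's note] The Hugepages summary printed above comes from `setup.sh status`. The same per-node counts can be read straight from standard kernel sysfs paths, independent of the test scripts; a small sketch:

    # Read 2 MiB hugepage counts per NUMA node directly from sysfs (standard kernel paths).
    for node in /sys/devices/system/node/node*; do
        total=$(cat "$node"/hugepages/hugepages-2048kB/nr_hugepages)
        free=$(cat "$node"/hugepages/hugepages-2048kB/free_hugepages)
        echo "${node##*/}: ${free} free / ${total} total (2048kB)"
    done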
00:04:29.529 09:58:15 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:29.529 09:58:15 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:29.529 09:58:15 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.841 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:32.841 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:33.104 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:33.104 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:35.025 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:35.025 09:58:20 -- common/autotest_common.sh@1529 -- # sleep 1 00:04:35.971 09:58:21 -- common/autotest_common.sh@1530 -- # bdfs=() 00:04:35.971 09:58:21 -- common/autotest_common.sh@1530 -- # local bdfs 00:04:35.971 09:58:21 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:04:35.971 09:58:21 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:04:35.971 09:58:21 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:35.971 09:58:21 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:35.971 09:58:21 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.971 09:58:21 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.971 09:58:21 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:36.232 09:58:21 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:36.232 09:58:21 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:65:00.0 00:04:36.232 09:58:21 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.572 Waiting for block devices as requested 00:04:39.572 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:39.572 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:39.572 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:39.834 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:39.834 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:39.834 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:39.834 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:40.096 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:40.096 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:40.358 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:40.358 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:40.358 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:40.620 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:40.621 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:40.621 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:40.621 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:40.883 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:41.145 09:58:26 -- common/autotest_common.sh@1535 -- # 
for bdf in "${bdfs[@]}" 00:04:41.145 09:58:26 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1499 -- # grep 0000:65:00.0/nvme/nvme 00:04:41.145 09:58:26 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:41.145 09:58:26 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:04:41.145 09:58:26 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:04:41.145 09:58:26 -- common/autotest_common.sh@1542 -- # grep oacs 00:04:41.145 09:58:26 -- common/autotest_common.sh@1542 -- # oacs=' 0x5f' 00:04:41.145 09:58:26 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:04:41.145 09:58:26 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:04:41.145 09:58:26 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:04:41.145 09:58:26 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:04:41.145 09:58:26 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:04:41.145 09:58:26 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:04:41.145 09:58:26 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:04:41.145 09:58:26 -- common/autotest_common.sh@1554 -- # continue 00:04:41.145 09:58:26 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:41.145 09:58:26 -- common/autotest_common.sh@727 -- # xtrace_disable 00:04:41.145 09:58:26 -- common/autotest_common.sh@10 -- # set +x 00:04:41.145 09:58:26 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:41.145 09:58:26 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:41.145 09:58:26 -- common/autotest_common.sh@10 -- # set +x 00:04:41.145 09:58:26 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.485 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:44.485 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:44.798 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:44.798 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:44.798 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:44.798 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:45.060 09:58:30 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:45.060 09:58:30 -- common/autotest_common.sh@727 -- # xtrace_disable 
00:04:45.060 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:04:45.060 09:58:30 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:45.060 09:58:30 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:04:45.060 09:58:30 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.060 09:58:30 -- common/autotest_common.sh@1574 -- # bdfs=() 00:04:45.060 09:58:30 -- common/autotest_common.sh@1574 -- # local bdfs 00:04:45.060 09:58:30 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:04:45.060 09:58:30 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:45.060 09:58:30 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:45.060 09:58:30 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.060 09:58:30 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.060 09:58:30 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:45.060 09:58:30 -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:04:45.060 09:58:30 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:65:00.0 00:04:45.060 09:58:30 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:04:45.060 09:58:30 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:45.060 09:58:30 -- common/autotest_common.sh@1577 -- # device=0xa80a 00:04:45.060 09:58:30 -- common/autotest_common.sh@1578 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:45.060 09:58:30 -- common/autotest_common.sh@1583 -- # printf '%s\n' 00:04:45.060 09:58:30 -- common/autotest_common.sh@1589 -- # [[ -z '' ]] 00:04:45.060 09:58:30 -- common/autotest_common.sh@1590 -- # return 0 00:04:45.060 09:58:30 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:45.060 09:58:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:45.060 09:58:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.060 09:58:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:45.060 09:58:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:45.060 09:58:30 -- common/autotest_common.sh@721 -- # xtrace_disable 00:04:45.060 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:04:45.060 09:58:30 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.060 09:58:30 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.060 09:58:30 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.060 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:04:45.060 ************************************ 00:04:45.060 START TEST env 00:04:45.060 ************************************ 00:04:45.060 09:58:30 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:45.323 * Looking for test storage... 
00:04:45.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:45.323 09:58:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.323 09:58:30 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.323 09:58:30 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:45.323 09:58:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.323 ************************************ 00:04:45.323 START TEST env_memory 00:04:45.323 ************************************ 00:04:45.323 09:58:30 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:45.323 00:04:45.323 00:04:45.323 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.323 http://cunit.sourceforge.net/ 00:04:45.323 00:04:45.323 00:04:45.323 Suite: memory 00:04:45.323 Test: alloc and free memory map ...[2024-05-15 09:58:31.031624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.323 passed 00:04:45.324 Test: mem map translation ...[2024-05-15 09:58:31.057351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:45.324 [2024-05-15 09:58:31.057382] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:45.324 [2024-05-15 09:58:31.057431] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:45.324 [2024-05-15 09:58:31.057438] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:45.324 passed 00:04:45.324 Test: mem map registration ...[2024-05-15 09:58:31.112889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:45.324 [2024-05-15 09:58:31.112915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:45.587 passed 00:04:45.587 Test: mem map adjacent registrations ...passed 00:04:45.587 00:04:45.587 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.587 suites 1 1 n/a 0 0 00:04:45.587 tests 4 4 4 0 0 00:04:45.587 asserts 152 152 152 0 n/a 00:04:45.587 00:04:45.587 Elapsed time = 0.193 seconds 00:04:45.587 00:04:45.587 real 0m0.208s 00:04:45.587 user 0m0.197s 00:04:45.587 sys 0m0.010s 00:04:45.587 09:58:31 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:45.587 09:58:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:45.587 ************************************ 00:04:45.587 END TEST env_memory 00:04:45.587 ************************************ 00:04:45.587 09:58:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.587 09:58:31 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:45.587 09:58:31 env -- common/autotest_common.sh@1104 -- # xtrace_disable 
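Every suite in this log runs through the same wrapper: an argument-count guard (the '[' 2 -le 1 ']' checks above), a START TEST banner, the test binary, and an END TEST banner with timing. A rough sketch of that pattern in plain bash; it is illustrative only and not the actual run_test helper from autotest_common.sh.

    # Hedged sketch of the run_test wrapper behind the START/END TEST banners.
    run_test() {
        local name=$1; shift
        [ "$#" -le 0 ] && { echo "run_test needs a name and a command" >&2; return 1; }
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local start=$SECONDS
        "$@"; local rc=$?             # run the suite, e.g. test/env/memory/memory_ut
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        echo "$name finished in $((SECONDS - start))s (rc=$rc)"
        return $rc
    }

    run_test env_memory ./memory_ut   # usage mirroring the invocation traced above
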
00:04:45.587 09:58:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.587 ************************************ 00:04:45.587 START TEST env_vtophys 00:04:45.587 ************************************ 00:04:45.587 09:58:31 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:45.587 EAL: lib.eal log level changed from notice to debug 00:04:45.588 EAL: Detected lcore 0 as core 0 on socket 0 00:04:45.588 EAL: Detected lcore 1 as core 1 on socket 0 00:04:45.588 EAL: Detected lcore 2 as core 2 on socket 0 00:04:45.588 EAL: Detected lcore 3 as core 3 on socket 0 00:04:45.588 EAL: Detected lcore 4 as core 4 on socket 0 00:04:45.588 EAL: Detected lcore 5 as core 5 on socket 0 00:04:45.588 EAL: Detected lcore 6 as core 6 on socket 0 00:04:45.588 EAL: Detected lcore 7 as core 7 on socket 0 00:04:45.588 EAL: Detected lcore 8 as core 8 on socket 0 00:04:45.588 EAL: Detected lcore 9 as core 9 on socket 0 00:04:45.588 EAL: Detected lcore 10 as core 10 on socket 0 00:04:45.588 EAL: Detected lcore 11 as core 11 on socket 0 00:04:45.588 EAL: Detected lcore 12 as core 12 on socket 0 00:04:45.588 EAL: Detected lcore 13 as core 13 on socket 0 00:04:45.588 EAL: Detected lcore 14 as core 14 on socket 0 00:04:45.588 EAL: Detected lcore 15 as core 15 on socket 0 00:04:45.588 EAL: Detected lcore 16 as core 16 on socket 0 00:04:45.588 EAL: Detected lcore 17 as core 17 on socket 0 00:04:45.588 EAL: Detected lcore 18 as core 18 on socket 0 00:04:45.588 EAL: Detected lcore 19 as core 19 on socket 0 00:04:45.588 EAL: Detected lcore 20 as core 20 on socket 0 00:04:45.588 EAL: Detected lcore 21 as core 21 on socket 0 00:04:45.588 EAL: Detected lcore 22 as core 22 on socket 0 00:04:45.588 EAL: Detected lcore 23 as core 23 on socket 0 00:04:45.588 EAL: Detected lcore 24 as core 24 on socket 0 00:04:45.588 EAL: Detected lcore 25 as core 25 on socket 0 00:04:45.588 EAL: Detected lcore 26 as core 26 on socket 0 00:04:45.588 EAL: Detected lcore 27 as core 27 on socket 0 00:04:45.588 EAL: Detected lcore 28 as core 28 on socket 0 00:04:45.588 EAL: Detected lcore 29 as core 29 on socket 0 00:04:45.588 EAL: Detected lcore 30 as core 30 on socket 0 00:04:45.588 EAL: Detected lcore 31 as core 31 on socket 0 00:04:45.588 EAL: Detected lcore 32 as core 32 on socket 0 00:04:45.588 EAL: Detected lcore 33 as core 33 on socket 0 00:04:45.588 EAL: Detected lcore 34 as core 34 on socket 0 00:04:45.588 EAL: Detected lcore 35 as core 35 on socket 0 00:04:45.588 EAL: Detected lcore 36 as core 0 on socket 1 00:04:45.588 EAL: Detected lcore 37 as core 1 on socket 1 00:04:45.588 EAL: Detected lcore 38 as core 2 on socket 1 00:04:45.588 EAL: Detected lcore 39 as core 3 on socket 1 00:04:45.588 EAL: Detected lcore 40 as core 4 on socket 1 00:04:45.588 EAL: Detected lcore 41 as core 5 on socket 1 00:04:45.588 EAL: Detected lcore 42 as core 6 on socket 1 00:04:45.588 EAL: Detected lcore 43 as core 7 on socket 1 00:04:45.588 EAL: Detected lcore 44 as core 8 on socket 1 00:04:45.588 EAL: Detected lcore 45 as core 9 on socket 1 00:04:45.588 EAL: Detected lcore 46 as core 10 on socket 1 00:04:45.588 EAL: Detected lcore 47 as core 11 on socket 1 00:04:45.588 EAL: Detected lcore 48 as core 12 on socket 1 00:04:45.588 EAL: Detected lcore 49 as core 13 on socket 1 00:04:45.588 EAL: Detected lcore 50 as core 14 on socket 1 00:04:45.588 EAL: Detected lcore 51 as core 15 on socket 1 00:04:45.588 EAL: Detected lcore 52 as core 16 on socket 1 00:04:45.588 EAL: Detected lcore 
53 as core 17 on socket 1 00:04:45.588 EAL: Detected lcore 54 as core 18 on socket 1 00:04:45.588 EAL: Detected lcore 55 as core 19 on socket 1 00:04:45.588 EAL: Detected lcore 56 as core 20 on socket 1 00:04:45.588 EAL: Detected lcore 57 as core 21 on socket 1 00:04:45.588 EAL: Detected lcore 58 as core 22 on socket 1 00:04:45.588 EAL: Detected lcore 59 as core 23 on socket 1 00:04:45.588 EAL: Detected lcore 60 as core 24 on socket 1 00:04:45.588 EAL: Detected lcore 61 as core 25 on socket 1 00:04:45.588 EAL: Detected lcore 62 as core 26 on socket 1 00:04:45.588 EAL: Detected lcore 63 as core 27 on socket 1 00:04:45.588 EAL: Detected lcore 64 as core 28 on socket 1 00:04:45.588 EAL: Detected lcore 65 as core 29 on socket 1 00:04:45.588 EAL: Detected lcore 66 as core 30 on socket 1 00:04:45.588 EAL: Detected lcore 67 as core 31 on socket 1 00:04:45.588 EAL: Detected lcore 68 as core 32 on socket 1 00:04:45.588 EAL: Detected lcore 69 as core 33 on socket 1 00:04:45.588 EAL: Detected lcore 70 as core 34 on socket 1 00:04:45.588 EAL: Detected lcore 71 as core 35 on socket 1 00:04:45.588 EAL: Detected lcore 72 as core 0 on socket 0 00:04:45.588 EAL: Detected lcore 73 as core 1 on socket 0 00:04:45.588 EAL: Detected lcore 74 as core 2 on socket 0 00:04:45.588 EAL: Detected lcore 75 as core 3 on socket 0 00:04:45.588 EAL: Detected lcore 76 as core 4 on socket 0 00:04:45.588 EAL: Detected lcore 77 as core 5 on socket 0 00:04:45.588 EAL: Detected lcore 78 as core 6 on socket 0 00:04:45.588 EAL: Detected lcore 79 as core 7 on socket 0 00:04:45.588 EAL: Detected lcore 80 as core 8 on socket 0 00:04:45.588 EAL: Detected lcore 81 as core 9 on socket 0 00:04:45.588 EAL: Detected lcore 82 as core 10 on socket 0 00:04:45.588 EAL: Detected lcore 83 as core 11 on socket 0 00:04:45.588 EAL: Detected lcore 84 as core 12 on socket 0 00:04:45.588 EAL: Detected lcore 85 as core 13 on socket 0 00:04:45.588 EAL: Detected lcore 86 as core 14 on socket 0 00:04:45.588 EAL: Detected lcore 87 as core 15 on socket 0 00:04:45.588 EAL: Detected lcore 88 as core 16 on socket 0 00:04:45.588 EAL: Detected lcore 89 as core 17 on socket 0 00:04:45.588 EAL: Detected lcore 90 as core 18 on socket 0 00:04:45.588 EAL: Detected lcore 91 as core 19 on socket 0 00:04:45.588 EAL: Detected lcore 92 as core 20 on socket 0 00:04:45.588 EAL: Detected lcore 93 as core 21 on socket 0 00:04:45.588 EAL: Detected lcore 94 as core 22 on socket 0 00:04:45.588 EAL: Detected lcore 95 as core 23 on socket 0 00:04:45.588 EAL: Detected lcore 96 as core 24 on socket 0 00:04:45.588 EAL: Detected lcore 97 as core 25 on socket 0 00:04:45.588 EAL: Detected lcore 98 as core 26 on socket 0 00:04:45.588 EAL: Detected lcore 99 as core 27 on socket 0 00:04:45.588 EAL: Detected lcore 100 as core 28 on socket 0 00:04:45.588 EAL: Detected lcore 101 as core 29 on socket 0 00:04:45.588 EAL: Detected lcore 102 as core 30 on socket 0 00:04:45.588 EAL: Detected lcore 103 as core 31 on socket 0 00:04:45.588 EAL: Detected lcore 104 as core 32 on socket 0 00:04:45.588 EAL: Detected lcore 105 as core 33 on socket 0 00:04:45.588 EAL: Detected lcore 106 as core 34 on socket 0 00:04:45.588 EAL: Detected lcore 107 as core 35 on socket 0 00:04:45.588 EAL: Detected lcore 108 as core 0 on socket 1 00:04:45.588 EAL: Detected lcore 109 as core 1 on socket 1 00:04:45.588 EAL: Detected lcore 110 as core 2 on socket 1 00:04:45.588 EAL: Detected lcore 111 as core 3 on socket 1 00:04:45.588 EAL: Detected lcore 112 as core 4 on socket 1 00:04:45.588 EAL: Detected lcore 113 as core 5 on 
socket 1 00:04:45.588 EAL: Detected lcore 114 as core 6 on socket 1 00:04:45.588 EAL: Detected lcore 115 as core 7 on socket 1 00:04:45.588 EAL: Detected lcore 116 as core 8 on socket 1 00:04:45.588 EAL: Detected lcore 117 as core 9 on socket 1 00:04:45.588 EAL: Detected lcore 118 as core 10 on socket 1 00:04:45.588 EAL: Detected lcore 119 as core 11 on socket 1 00:04:45.588 EAL: Detected lcore 120 as core 12 on socket 1 00:04:45.588 EAL: Detected lcore 121 as core 13 on socket 1 00:04:45.588 EAL: Detected lcore 122 as core 14 on socket 1 00:04:45.588 EAL: Detected lcore 123 as core 15 on socket 1 00:04:45.588 EAL: Detected lcore 124 as core 16 on socket 1 00:04:45.588 EAL: Detected lcore 125 as core 17 on socket 1 00:04:45.588 EAL: Detected lcore 126 as core 18 on socket 1 00:04:45.588 EAL: Detected lcore 127 as core 19 on socket 1 00:04:45.588 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:45.588 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:45.588 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:45.588 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:45.588 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:45.588 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:45.588 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:45.588 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:45.588 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:45.588 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:45.588 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:45.588 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:45.588 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:45.588 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:45.588 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:45.588 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:45.588 EAL: Maximum logical cores by configuration: 128 00:04:45.588 EAL: Detected CPU lcores: 128 00:04:45.588 EAL: Detected NUMA nodes: 2 00:04:45.588 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:45.588 EAL: Detected shared linkage of DPDK 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:45.588 EAL: Registered [vdev] bus. 
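The lcore listing above is EAL reading the host topology: two sockets with 36 cores each plus their hyperthread siblings, truncated at the configured maximum of 128 logical cores (hence the Skipped lcore lines for 128-143). The same mapping can be read straight from sysfs; the sketch below uses standard Linux topology files and nothing SPDK-specific.

    # Hedged sketch: print the lcore/core/socket mapping that EAL logs above.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        lcore=${cpu##*cpu}
        core=$(cat "$cpu/topology/core_id")
        socket=$(cat "$cpu/topology/physical_package_id")
        echo "lcore $lcore is core $core on socket $socket"
    done | sort -n -k2
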
00:04:45.588 EAL: bus.vdev log level changed from disabled to notice 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:45.588 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:45.588 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:45.588 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:45.588 EAL: No shared files mode enabled, IPC will be disabled 00:04:45.588 EAL: No shared files mode enabled, IPC is disabled 00:04:45.588 EAL: Bus pci wants IOVA as 'DC' 00:04:45.588 EAL: Bus vdev wants IOVA as 'DC' 00:04:45.588 EAL: Buses did not request a specific IOVA mode. 00:04:45.588 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:45.588 EAL: Selected IOVA mode 'VA' 00:04:45.588 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.588 EAL: Probing VFIO support... 00:04:45.589 EAL: IOMMU type 1 (Type 1) is supported 00:04:45.589 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:45.589 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:45.589 EAL: VFIO support initialized 00:04:45.589 EAL: Ask a virtual area of 0x2e000 bytes 00:04:45.589 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:45.589 EAL: Setting up physically contiguous memory... 
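"IOMMU is available, selecting IOVA as VA mode" and "VFIO support initialized" summarize a few host-side checks EAL performs before touching any device. A hedged shell equivalent follows, using standard sysfs paths; it approximates what those messages imply and is not the EAL implementation.

    # Hedged sketch: host checks behind the IOMMU/VFIO lines above.
    if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "IOMMU groups present -> IOVA as VA is usable"
    else
        echo "no IOMMU groups -> would fall back to IOVA as PA"
    fi

    if [ -d /sys/module/vfio_pci ] || modinfo vfio_pci >/dev/null 2>&1; then
        echo "vfio-pci driver available (Type 1 IOMMU path)"
    fi

    noiommu=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    [ "$(cat "$noiommu" 2>/dev/null)" = "Y" ] && echo "unsafe no-IOMMU mode enabled"
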
00:04:45.589 EAL: Setting maximum number of open files to 524288 00:04:45.589 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:45.589 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:45.589 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:45.589 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:45.589 EAL: Ask a virtual area of 0x61000 bytes 00:04:45.589 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:45.589 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:45.589 EAL: Ask a virtual area of 0x400000000 bytes 00:04:45.589 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:45.589 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:45.589 EAL: Hugepages will be freed exactly as allocated. 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: TSC frequency is ~2400000 KHz 00:04:45.589 EAL: Main lcore 0 is ready (tid=7f5ca6af7a00;cpuset=[0]) 00:04:45.589 EAL: Trying to obtain current memory policy. 00:04:45.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.589 EAL: Restoring previous memory policy: 0 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was expanded by 2MB 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:45.589 EAL: Mem event callback 'spdk:(nil)' registered 00:04:45.589 00:04:45.589 00:04:45.589 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.589 http://cunit.sourceforge.net/ 00:04:45.589 00:04:45.589 00:04:45.589 Suite: components_suite 00:04:45.589 Test: vtophys_malloc_test ...passed 00:04:45.589 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:45.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.589 EAL: Restoring previous memory policy: 4 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was expanded by 4MB 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was shrunk by 4MB 00:04:45.589 EAL: Trying to obtain current memory policy. 00:04:45.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.589 EAL: Restoring previous memory policy: 4 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was expanded by 6MB 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was shrunk by 6MB 00:04:45.589 EAL: Trying to obtain current memory policy. 00:04:45.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.589 EAL: Restoring previous memory policy: 4 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was expanded by 10MB 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was shrunk by 10MB 00:04:45.589 EAL: Trying to obtain current memory policy. 
00:04:45.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.589 EAL: Restoring previous memory policy: 4 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was expanded by 18MB 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was shrunk by 18MB 00:04:45.589 EAL: Trying to obtain current memory policy. 00:04:45.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.589 EAL: Restoring previous memory policy: 4 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was expanded by 34MB 00:04:45.589 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.589 EAL: request: mp_malloc_sync 00:04:45.589 EAL: No shared files mode enabled, IPC is disabled 00:04:45.589 EAL: Heap on socket 0 was shrunk by 34MB 00:04:45.589 EAL: Trying to obtain current memory policy. 00:04:45.589 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.851 EAL: Restoring previous memory policy: 4 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.851 EAL: request: mp_malloc_sync 00:04:45.851 EAL: No shared files mode enabled, IPC is disabled 00:04:45.851 EAL: Heap on socket 0 was expanded by 66MB 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.851 EAL: request: mp_malloc_sync 00:04:45.851 EAL: No shared files mode enabled, IPC is disabled 00:04:45.851 EAL: Heap on socket 0 was shrunk by 66MB 00:04:45.851 EAL: Trying to obtain current memory policy. 00:04:45.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.851 EAL: Restoring previous memory policy: 4 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.851 EAL: request: mp_malloc_sync 00:04:45.851 EAL: No shared files mode enabled, IPC is disabled 00:04:45.851 EAL: Heap on socket 0 was expanded by 130MB 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.851 EAL: request: mp_malloc_sync 00:04:45.851 EAL: No shared files mode enabled, IPC is disabled 00:04:45.851 EAL: Heap on socket 0 was shrunk by 130MB 00:04:45.851 EAL: Trying to obtain current memory policy. 00:04:45.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.851 EAL: Restoring previous memory policy: 4 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.851 EAL: request: mp_malloc_sync 00:04:45.851 EAL: No shared files mode enabled, IPC is disabled 00:04:45.851 EAL: Heap on socket 0 was expanded by 258MB 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.851 EAL: request: mp_malloc_sync 00:04:45.851 EAL: No shared files mode enabled, IPC is disabled 00:04:45.851 EAL: Heap on socket 0 was shrunk by 258MB 00:04:45.851 EAL: Trying to obtain current memory policy. 
00:04:45.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.851 EAL: Restoring previous memory policy: 4 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.851 EAL: request: mp_malloc_sync 00:04:45.851 EAL: No shared files mode enabled, IPC is disabled 00:04:45.851 EAL: Heap on socket 0 was expanded by 514MB 00:04:45.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.113 EAL: request: mp_malloc_sync 00:04:46.113 EAL: No shared files mode enabled, IPC is disabled 00:04:46.113 EAL: Heap on socket 0 was shrunk by 514MB 00:04:46.113 EAL: Trying to obtain current memory policy. 00:04:46.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.113 EAL: Restoring previous memory policy: 4 00:04:46.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.113 EAL: request: mp_malloc_sync 00:04:46.113 EAL: No shared files mode enabled, IPC is disabled 00:04:46.113 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.375 EAL: request: mp_malloc_sync 00:04:46.375 EAL: No shared files mode enabled, IPC is disabled 00:04:46.375 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:46.375 passed 00:04:46.375 00:04:46.375 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.375 suites 1 1 n/a 0 0 00:04:46.375 tests 2 2 2 0 0 00:04:46.375 asserts 497 497 497 0 n/a 00:04:46.375 00:04:46.375 Elapsed time = 0.642 seconds 00:04:46.375 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.375 EAL: request: mp_malloc_sync 00:04:46.375 EAL: No shared files mode enabled, IPC is disabled 00:04:46.375 EAL: Heap on socket 0 was shrunk by 2MB 00:04:46.375 EAL: No shared files mode enabled, IPC is disabled 00:04:46.375 EAL: No shared files mode enabled, IPC is disabled 00:04:46.375 EAL: No shared files mode enabled, IPC is disabled 00:04:46.375 00:04:46.375 real 0m0.760s 00:04:46.375 user 0m0.394s 00:04:46.375 sys 0m0.338s 00:04:46.375 09:58:32 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:46.375 09:58:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 ************************************ 00:04:46.375 END TEST env_vtophys 00:04:46.375 ************************************ 00:04:46.375 09:58:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:46.375 09:58:32 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:46.375 09:58:32 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:46.375 09:58:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.375 ************************************ 00:04:46.375 START TEST env_pci 00:04:46.375 ************************************ 00:04:46.375 09:58:32 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:46.375 00:04:46.375 00:04:46.375 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.375 http://cunit.sourceforge.net/ 00:04:46.375 00:04:46.375 00:04:46.375 Suite: pci 00:04:46.375 Test: pci_hook ...[2024-05-15 09:58:32.136039] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2583451 has claimed it 00:04:46.375 EAL: Cannot find device (10000:00:01.0) 00:04:46.375 EAL: Failed to attach device on primary process 00:04:46.375 passed 00:04:46.375 00:04:46.375 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:46.375 suites 1 1 n/a 0 0 00:04:46.375 tests 1 1 1 0 0 00:04:46.375 asserts 25 25 25 0 n/a 00:04:46.375 00:04:46.375 Elapsed time = 0.031 seconds 00:04:46.636 00:04:46.636 real 0m0.050s 00:04:46.636 user 0m0.014s 00:04:46.636 sys 0m0.035s 00:04:46.636 09:58:32 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:46.636 09:58:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:46.636 ************************************ 00:04:46.636 END TEST env_pci 00:04:46.636 ************************************ 00:04:46.636 09:58:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:46.636 09:58:32 env -- env/env.sh@15 -- # uname 00:04:46.636 09:58:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:46.636 09:58:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:46.636 09:58:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.636 09:58:32 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:04:46.636 09:58:32 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:46.636 09:58:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.636 ************************************ 00:04:46.636 START TEST env_dpdk_post_init 00:04:46.636 ************************************ 00:04:46.637 09:58:32 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.637 EAL: Detected CPU lcores: 128 00:04:46.637 EAL: Detected NUMA nodes: 2 00:04:46.637 EAL: Detected shared linkage of DPDK 00:04:46.637 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.637 EAL: Selected IOVA mode 'VA' 00:04:46.637 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.637 EAL: VFIO support initialized 00:04:46.637 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:46.637 EAL: Using IOMMU type 1 (Type 1) 00:04:46.898 EAL: Ignore mapping IO port bar(1) 00:04:46.898 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:47.159 EAL: Ignore mapping IO port bar(1) 00:04:47.159 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:47.159 EAL: Ignore mapping IO port bar(1) 00:04:47.421 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:47.421 EAL: Ignore mapping IO port bar(1) 00:04:47.683 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:47.683 EAL: Ignore mapping IO port bar(1) 00:04:47.945 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:47.945 EAL: Ignore mapping IO port bar(1) 00:04:47.945 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:48.207 EAL: Ignore mapping IO port bar(1) 00:04:48.207 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:48.470 EAL: Ignore mapping IO port bar(1) 00:04:48.470 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:48.732 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:48.732 EAL: Ignore mapping IO port bar(1) 00:04:48.995 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:48.995 EAL: Ignore mapping IO port bar(1) 00:04:49.257 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:04:49.257 EAL: Ignore mapping IO port bar(1) 00:04:49.519 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:49.519 EAL: Ignore mapping IO port bar(1) 00:04:49.519 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:49.781 EAL: Ignore mapping IO port bar(1) 00:04:49.781 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:50.043 EAL: Ignore mapping IO port bar(1) 00:04:50.043 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:50.305 EAL: Ignore mapping IO port bar(1) 00:04:50.305 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:50.305 EAL: Ignore mapping IO port bar(1) 00:04:50.567 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:50.567 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:50.567 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:50.567 Starting DPDK initialization... 00:04:50.567 Starting SPDK post initialization... 00:04:50.567 SPDK NVMe probe 00:04:50.567 Attaching to 0000:65:00.0 00:04:50.567 Attached to 0000:65:00.0 00:04:50.567 Cleaning up... 00:04:52.488 00:04:52.488 real 0m5.716s 00:04:52.488 user 0m0.173s 00:04:52.488 sys 0m0.087s 00:04:52.488 09:58:37 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:52.488 09:58:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.488 ************************************ 00:04:52.488 END TEST env_dpdk_post_init 00:04:52.488 ************************************ 00:04:52.488 09:58:38 env -- env/env.sh@26 -- # uname 00:04:52.488 09:58:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:52.488 09:58:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.488 09:58:38 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:52.488 09:58:38 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:52.488 09:58:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.488 ************************************ 00:04:52.488 START TEST env_mem_callbacks 00:04:52.488 ************************************ 00:04:52.488 09:58:38 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:52.488 EAL: Detected CPU lcores: 128 00:04:52.488 EAL: Detected NUMA nodes: 2 00:04:52.489 EAL: Detected shared linkage of DPDK 00:04:52.489 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:52.489 EAL: Selected IOVA mode 'VA' 00:04:52.489 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.489 EAL: VFIO support initialized 00:04:52.489 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:52.489 00:04:52.489 00:04:52.489 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.489 http://cunit.sourceforge.net/ 00:04:52.489 00:04:52.489 00:04:52.489 Suite: memory 00:04:52.489 Test: test ... 
00:04:52.489 register 0x200000200000 2097152 00:04:52.489 malloc 3145728 00:04:52.489 register 0x200000400000 4194304 00:04:52.489 buf 0x200000500000 len 3145728 PASSED 00:04:52.489 malloc 64 00:04:52.489 buf 0x2000004fff40 len 64 PASSED 00:04:52.489 malloc 4194304 00:04:52.489 register 0x200000800000 6291456 00:04:52.489 buf 0x200000a00000 len 4194304 PASSED 00:04:52.489 free 0x200000500000 3145728 00:04:52.489 free 0x2000004fff40 64 00:04:52.489 unregister 0x200000400000 4194304 PASSED 00:04:52.489 free 0x200000a00000 4194304 00:04:52.489 unregister 0x200000800000 6291456 PASSED 00:04:52.489 malloc 8388608 00:04:52.489 register 0x200000400000 10485760 00:04:52.489 buf 0x200000600000 len 8388608 PASSED 00:04:52.489 free 0x200000600000 8388608 00:04:52.489 unregister 0x200000400000 10485760 PASSED 00:04:52.489 passed 00:04:52.489 00:04:52.489 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.489 suites 1 1 n/a 0 0 00:04:52.489 tests 1 1 1 0 0 00:04:52.489 asserts 15 15 15 0 n/a 00:04:52.489 00:04:52.489 Elapsed time = 0.007 seconds 00:04:52.489 00:04:52.489 real 0m0.063s 00:04:52.489 user 0m0.021s 00:04:52.489 sys 0m0.042s 00:04:52.489 09:58:38 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:52.489 09:58:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:52.489 ************************************ 00:04:52.489 END TEST env_mem_callbacks 00:04:52.489 ************************************ 00:04:52.489 00:04:52.489 real 0m7.337s 00:04:52.489 user 0m1.003s 00:04:52.489 sys 0m0.858s 00:04:52.489 09:58:38 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:52.489 09:58:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:52.489 ************************************ 00:04:52.489 END TEST env 00:04:52.489 ************************************ 00:04:52.489 09:58:38 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:52.489 09:58:38 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:52.489 09:58:38 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:52.489 09:58:38 -- common/autotest_common.sh@10 -- # set +x 00:04:52.489 ************************************ 00:04:52.489 START TEST rpc 00:04:52.489 ************************************ 00:04:52.489 09:58:38 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:52.753 * Looking for test storage... 00:04:52.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.753 09:58:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2584896 00:04:52.753 09:58:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.753 09:58:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:52.753 09:58:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2584896 00:04:52.753 09:58:38 rpc -- common/autotest_common.sh@828 -- # '[' -z 2584896 ']' 00:04:52.753 09:58:38 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.753 09:58:38 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:04:52.753 09:58:38 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.753 09:58:38 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:04:52.753 09:58:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.753 [2024-05-15 09:58:38.418844] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:04:52.753 [2024-05-15 09:58:38.418907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584896 ] 00:04:52.753 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.753 [2024-05-15 09:58:38.483990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.753 [2024-05-15 09:58:38.522144] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:52.753 [2024-05-15 09:58:38.522198] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2584896' to capture a snapshot of events at runtime. 00:04:52.753 [2024-05-15 09:58:38.522206] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:52.753 [2024-05-15 09:58:38.522213] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:52.753 [2024-05-15 09:58:38.522219] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2584896 for offline analysis/debug. 00:04:52.753 [2024-05-15 09:58:38.522249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.700 09:58:39 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:04:53.700 09:58:39 rpc -- common/autotest_common.sh@861 -- # return 0 00:04:53.700 09:58:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:53.700 09:58:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:53.700 09:58:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:53.700 09:58:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:53.700 09:58:39 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.700 09:58:39 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.700 09:58:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.700 ************************************ 00:04:53.700 START TEST rpc_integrity 00:04:53.700 ************************************ 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:53.700 09:58:39 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.700 { 00:04:53.700 "name": "Malloc0", 00:04:53.700 "aliases": [ 00:04:53.700 "d356411f-bb16-42d3-8b91-901343a16262" 00:04:53.700 ], 00:04:53.700 "product_name": "Malloc disk", 00:04:53.700 "block_size": 512, 00:04:53.700 "num_blocks": 16384, 00:04:53.700 "uuid": "d356411f-bb16-42d3-8b91-901343a16262", 00:04:53.700 "assigned_rate_limits": { 00:04:53.700 "rw_ios_per_sec": 0, 00:04:53.700 "rw_mbytes_per_sec": 0, 00:04:53.700 "r_mbytes_per_sec": 0, 00:04:53.700 "w_mbytes_per_sec": 0 00:04:53.700 }, 00:04:53.700 "claimed": false, 00:04:53.700 "zoned": false, 00:04:53.700 "supported_io_types": { 00:04:53.700 "read": true, 00:04:53.700 "write": true, 00:04:53.700 "unmap": true, 00:04:53.700 "write_zeroes": true, 00:04:53.700 "flush": true, 00:04:53.700 "reset": true, 00:04:53.700 "compare": false, 00:04:53.700 "compare_and_write": false, 00:04:53.700 "abort": true, 00:04:53.700 "nvme_admin": false, 00:04:53.700 "nvme_io": false 00:04:53.700 }, 00:04:53.700 "memory_domains": [ 00:04:53.700 { 00:04:53.700 "dma_device_id": "system", 00:04:53.700 "dma_device_type": 1 00:04:53.700 }, 00:04:53.700 { 00:04:53.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.700 "dma_device_type": 2 00:04:53.700 } 00:04:53.700 ], 00:04:53.700 "driver_specific": {} 00:04:53.700 } 00:04:53.700 ]' 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.700 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.700 [2024-05-15 09:58:39.381579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:53.700 [2024-05-15 09:58:39.381611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.700 [2024-05-15 09:58:39.381624] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdc6450 00:04:53.700 [2024-05-15 09:58:39.381631] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.700 [2024-05-15 09:58:39.382958] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.700 [2024-05-15 09:58:39.382980] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.700 Passthru0 00:04:53.700 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.701 { 00:04:53.701 "name": "Malloc0", 00:04:53.701 "aliases": [ 00:04:53.701 "d356411f-bb16-42d3-8b91-901343a16262" 00:04:53.701 ], 00:04:53.701 "product_name": "Malloc disk", 00:04:53.701 "block_size": 512, 00:04:53.701 "num_blocks": 16384, 00:04:53.701 "uuid": "d356411f-bb16-42d3-8b91-901343a16262", 00:04:53.701 "assigned_rate_limits": { 00:04:53.701 "rw_ios_per_sec": 0, 00:04:53.701 "rw_mbytes_per_sec": 0, 00:04:53.701 "r_mbytes_per_sec": 0, 00:04:53.701 "w_mbytes_per_sec": 0 00:04:53.701 }, 00:04:53.701 "claimed": true, 00:04:53.701 "claim_type": "exclusive_write", 00:04:53.701 "zoned": false, 00:04:53.701 "supported_io_types": { 00:04:53.701 "read": true, 00:04:53.701 "write": true, 00:04:53.701 "unmap": true, 00:04:53.701 "write_zeroes": true, 00:04:53.701 "flush": true, 00:04:53.701 "reset": true, 00:04:53.701 "compare": false, 00:04:53.701 "compare_and_write": false, 00:04:53.701 "abort": true, 00:04:53.701 "nvme_admin": false, 00:04:53.701 "nvme_io": false 00:04:53.701 }, 00:04:53.701 "memory_domains": [ 00:04:53.701 { 00:04:53.701 "dma_device_id": "system", 00:04:53.701 "dma_device_type": 1 00:04:53.701 }, 00:04:53.701 { 00:04:53.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.701 "dma_device_type": 2 00:04:53.701 } 00:04:53.701 ], 00:04:53.701 "driver_specific": {} 00:04:53.701 }, 00:04:53.701 { 00:04:53.701 "name": "Passthru0", 00:04:53.701 "aliases": [ 00:04:53.701 "4665bac1-7472-5e96-af93-519caf7ef52f" 00:04:53.701 ], 00:04:53.701 "product_name": "passthru", 00:04:53.701 "block_size": 512, 00:04:53.701 "num_blocks": 16384, 00:04:53.701 "uuid": "4665bac1-7472-5e96-af93-519caf7ef52f", 00:04:53.701 "assigned_rate_limits": { 00:04:53.701 "rw_ios_per_sec": 0, 00:04:53.701 "rw_mbytes_per_sec": 0, 00:04:53.701 "r_mbytes_per_sec": 0, 00:04:53.701 "w_mbytes_per_sec": 0 00:04:53.701 }, 00:04:53.701 "claimed": false, 00:04:53.701 "zoned": false, 00:04:53.701 "supported_io_types": { 00:04:53.701 "read": true, 00:04:53.701 "write": true, 00:04:53.701 "unmap": true, 00:04:53.701 "write_zeroes": true, 00:04:53.701 "flush": true, 00:04:53.701 "reset": true, 00:04:53.701 "compare": false, 00:04:53.701 "compare_and_write": false, 00:04:53.701 "abort": true, 00:04:53.701 "nvme_admin": false, 00:04:53.701 "nvme_io": false 00:04:53.701 }, 00:04:53.701 "memory_domains": [ 00:04:53.701 { 00:04:53.701 "dma_device_id": "system", 00:04:53.701 "dma_device_type": 1 00:04:53.701 }, 00:04:53.701 { 00:04:53.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.701 "dma_device_type": 2 00:04:53.701 } 00:04:53.701 ], 00:04:53.701 "driver_specific": { 00:04:53.701 "passthru": { 00:04:53.701 "name": "Passthru0", 00:04:53.701 "base_bdev_name": "Malloc0" 00:04:53.701 } 00:04:53.701 } 00:04:53.701 } 00:04:53.701 ]' 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.701 
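The rpc_integrity trace around this point exercises a full create/inspect/delete cycle against the bdev RPC surface: create a malloc bdev, stack a passthru bdev on it, confirm the bdev count with jq at each step, then tear both down. The same cycle as a plain script against a running spdk_tgt, using scripts/rpc.py with the RPC names seen in the trace; the relative paths and exit-on-mismatch checks are illustrative assumptions.

    # Hedged sketch of the rpc_integrity cycle, driven through scripts/rpc.py.
    rpc=./scripts/rpc.py                                # assumes spdk_tgt listens on the default socket

    malloc=$($rpc bdev_malloc_create 8 512)             # 8 MB, 512-byte blocks; prints e.g. Malloc0
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 1 ] || exit 1

    $rpc bdev_passthru_create -b "$malloc" -p Passthru0 # passthru vbdev claims the malloc bdev
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ] || exit 1

    $rpc bdev_passthru_delete Passthru0                 # tear down in reverse order
    $rpc bdev_malloc_delete "$malloc"
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ] || exit 1
    echo "rpc_integrity cycle OK"
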
09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.701 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.701 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.963 09:58:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.963 00:04:53.963 real 0m0.291s 00:04:53.963 user 0m0.198s 00:04:53.963 sys 0m0.030s 00:04:53.963 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:53.963 09:58:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.963 ************************************ 00:04:53.963 END TEST rpc_integrity 00:04:53.963 ************************************ 00:04:53.963 09:58:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:53.963 09:58:39 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:53.963 09:58:39 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:53.963 09:58:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.963 ************************************ 00:04:53.963 START TEST rpc_plugins 00:04:53.963 ************************************ 00:04:53.963 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:04:53.963 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:53.963 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.963 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:53.963 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.963 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:53.963 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:53.963 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.963 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:53.963 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.963 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:53.963 { 00:04:53.963 "name": "Malloc1", 00:04:53.963 "aliases": [ 00:04:53.963 "11f44043-bed6-4c51-9682-452b81cd686b" 00:04:53.963 ], 00:04:53.963 "product_name": "Malloc disk", 00:04:53.963 "block_size": 4096, 00:04:53.963 "num_blocks": 256, 00:04:53.963 "uuid": "11f44043-bed6-4c51-9682-452b81cd686b", 00:04:53.963 "assigned_rate_limits": { 00:04:53.963 "rw_ios_per_sec": 0, 00:04:53.963 "rw_mbytes_per_sec": 0, 00:04:53.963 "r_mbytes_per_sec": 0, 00:04:53.963 "w_mbytes_per_sec": 0 00:04:53.963 }, 00:04:53.963 "claimed": false, 00:04:53.963 "zoned": false, 00:04:53.963 "supported_io_types": { 00:04:53.963 "read": true, 00:04:53.963 "write": true, 00:04:53.963 "unmap": true, 00:04:53.963 "write_zeroes": true, 00:04:53.963 
"flush": true, 00:04:53.963 "reset": true, 00:04:53.964 "compare": false, 00:04:53.964 "compare_and_write": false, 00:04:53.964 "abort": true, 00:04:53.964 "nvme_admin": false, 00:04:53.964 "nvme_io": false 00:04:53.964 }, 00:04:53.964 "memory_domains": [ 00:04:53.964 { 00:04:53.964 "dma_device_id": "system", 00:04:53.964 "dma_device_type": 1 00:04:53.964 }, 00:04:53.964 { 00:04:53.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.964 "dma_device_type": 2 00:04:53.964 } 00:04:53.964 ], 00:04:53.964 "driver_specific": {} 00:04:53.964 } 00:04:53.964 ]' 00:04:53.964 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:53.964 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:53.964 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:53.964 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.964 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:53.964 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.964 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:53.964 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:53.964 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:53.964 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:53.964 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:53.964 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:54.226 09:58:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:54.226 00:04:54.226 real 0m0.151s 00:04:54.226 user 0m0.097s 00:04:54.226 sys 0m0.018s 00:04:54.226 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:54.226 09:58:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:54.226 ************************************ 00:04:54.226 END TEST rpc_plugins 00:04:54.226 ************************************ 00:04:54.226 09:58:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:54.226 09:58:39 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:54.226 09:58:39 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:54.226 09:58:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.226 ************************************ 00:04:54.226 START TEST rpc_trace_cmd_test 00:04:54.226 ************************************ 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:54.226 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2584896", 00:04:54.226 "tpoint_group_mask": "0x8", 00:04:54.226 "iscsi_conn": { 00:04:54.226 "mask": "0x2", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "scsi": { 00:04:54.226 "mask": "0x4", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "bdev": { 00:04:54.226 "mask": "0x8", 00:04:54.226 "tpoint_mask": 
"0xffffffffffffffff" 00:04:54.226 }, 00:04:54.226 "nvmf_rdma": { 00:04:54.226 "mask": "0x10", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "nvmf_tcp": { 00:04:54.226 "mask": "0x20", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "ftl": { 00:04:54.226 "mask": "0x40", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "blobfs": { 00:04:54.226 "mask": "0x80", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "dsa": { 00:04:54.226 "mask": "0x200", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "thread": { 00:04:54.226 "mask": "0x400", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "nvme_pcie": { 00:04:54.226 "mask": "0x800", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "iaa": { 00:04:54.226 "mask": "0x1000", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "nvme_tcp": { 00:04:54.226 "mask": "0x2000", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "bdev_nvme": { 00:04:54.226 "mask": "0x4000", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 }, 00:04:54.226 "sock": { 00:04:54.226 "mask": "0x8000", 00:04:54.226 "tpoint_mask": "0x0" 00:04:54.226 } 00:04:54.226 }' 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:54.226 09:58:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:54.533 09:58:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:54.533 09:58:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:54.533 09:58:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:54.533 00:04:54.533 real 0m0.249s 00:04:54.534 user 0m0.217s 00:04:54.534 sys 0m0.024s 00:04:54.534 09:58:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:54.534 09:58:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 ************************************ 00:04:54.534 END TEST rpc_trace_cmd_test 00:04:54.534 ************************************ 00:04:54.534 09:58:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:54.534 09:58:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:54.534 09:58:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:54.534 09:58:40 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:54.534 09:58:40 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:54.534 09:58:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 ************************************ 00:04:54.534 START TEST rpc_daemon_integrity 00:04:54.534 ************************************ 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:54.534 { 00:04:54.534 "name": "Malloc2", 00:04:54.534 "aliases": [ 00:04:54.534 "3afc03b1-1e69-4fe4-8e1c-588d9db4a899" 00:04:54.534 ], 00:04:54.534 "product_name": "Malloc disk", 00:04:54.534 "block_size": 512, 00:04:54.534 "num_blocks": 16384, 00:04:54.534 "uuid": "3afc03b1-1e69-4fe4-8e1c-588d9db4a899", 00:04:54.534 "assigned_rate_limits": { 00:04:54.534 "rw_ios_per_sec": 0, 00:04:54.534 "rw_mbytes_per_sec": 0, 00:04:54.534 "r_mbytes_per_sec": 0, 00:04:54.534 "w_mbytes_per_sec": 0 00:04:54.534 }, 00:04:54.534 "claimed": false, 00:04:54.534 "zoned": false, 00:04:54.534 "supported_io_types": { 00:04:54.534 "read": true, 00:04:54.534 "write": true, 00:04:54.534 "unmap": true, 00:04:54.534 "write_zeroes": true, 00:04:54.534 "flush": true, 00:04:54.534 "reset": true, 00:04:54.534 "compare": false, 00:04:54.534 "compare_and_write": false, 00:04:54.534 "abort": true, 00:04:54.534 "nvme_admin": false, 00:04:54.534 "nvme_io": false 00:04:54.534 }, 00:04:54.534 "memory_domains": [ 00:04:54.534 { 00:04:54.534 "dma_device_id": "system", 00:04:54.534 "dma_device_type": 1 00:04:54.534 }, 00:04:54.534 { 00:04:54.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.534 "dma_device_type": 2 00:04:54.534 } 00:04:54.534 ], 00:04:54.534 "driver_specific": {} 00:04:54.534 } 00:04:54.534 ]' 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.534 [2024-05-15 09:58:40.316125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:54.534 [2024-05-15 09:58:40.316159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:54.534 [2024-05-15 09:58:40.316173] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdc6e50 00:04:54.534 [2024-05-15 09:58:40.316180] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:54.534 [2024-05-15 09:58:40.317402] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:54.534 [2024-05-15 09:58:40.317421] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:54.534 Passthru0 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.534 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.796 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.796 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:54.796 { 00:04:54.796 "name": "Malloc2", 00:04:54.796 "aliases": [ 00:04:54.796 "3afc03b1-1e69-4fe4-8e1c-588d9db4a899" 00:04:54.796 ], 00:04:54.796 "product_name": "Malloc disk", 00:04:54.796 "block_size": 512, 00:04:54.796 "num_blocks": 16384, 00:04:54.796 "uuid": "3afc03b1-1e69-4fe4-8e1c-588d9db4a899", 00:04:54.796 "assigned_rate_limits": { 00:04:54.796 "rw_ios_per_sec": 0, 00:04:54.796 "rw_mbytes_per_sec": 0, 00:04:54.796 "r_mbytes_per_sec": 0, 00:04:54.796 "w_mbytes_per_sec": 0 00:04:54.796 }, 00:04:54.796 "claimed": true, 00:04:54.796 "claim_type": "exclusive_write", 00:04:54.796 "zoned": false, 00:04:54.796 "supported_io_types": { 00:04:54.796 "read": true, 00:04:54.796 "write": true, 00:04:54.796 "unmap": true, 00:04:54.796 "write_zeroes": true, 00:04:54.796 "flush": true, 00:04:54.796 "reset": true, 00:04:54.796 "compare": false, 00:04:54.796 "compare_and_write": false, 00:04:54.796 "abort": true, 00:04:54.796 "nvme_admin": false, 00:04:54.796 "nvme_io": false 00:04:54.796 }, 00:04:54.796 "memory_domains": [ 00:04:54.796 { 00:04:54.796 "dma_device_id": "system", 00:04:54.796 "dma_device_type": 1 00:04:54.796 }, 00:04:54.796 { 00:04:54.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.796 "dma_device_type": 2 00:04:54.796 } 00:04:54.796 ], 00:04:54.796 "driver_specific": {} 00:04:54.796 }, 00:04:54.796 { 00:04:54.796 "name": "Passthru0", 00:04:54.796 "aliases": [ 00:04:54.796 "d3b42e93-3473-584a-8cae-a3536f7013f4" 00:04:54.796 ], 00:04:54.796 "product_name": "passthru", 00:04:54.796 "block_size": 512, 00:04:54.796 "num_blocks": 16384, 00:04:54.796 "uuid": "d3b42e93-3473-584a-8cae-a3536f7013f4", 00:04:54.796 "assigned_rate_limits": { 00:04:54.796 "rw_ios_per_sec": 0, 00:04:54.796 "rw_mbytes_per_sec": 0, 00:04:54.796 "r_mbytes_per_sec": 0, 00:04:54.796 "w_mbytes_per_sec": 0 00:04:54.796 }, 00:04:54.796 "claimed": false, 00:04:54.796 "zoned": false, 00:04:54.797 "supported_io_types": { 00:04:54.797 "read": true, 00:04:54.797 "write": true, 00:04:54.797 "unmap": true, 00:04:54.797 "write_zeroes": true, 00:04:54.797 "flush": true, 00:04:54.797 "reset": true, 00:04:54.797 "compare": false, 00:04:54.797 "compare_and_write": false, 00:04:54.797 "abort": true, 00:04:54.797 "nvme_admin": false, 00:04:54.797 "nvme_io": false 00:04:54.797 }, 00:04:54.797 "memory_domains": [ 00:04:54.797 { 00:04:54.797 "dma_device_id": "system", 00:04:54.797 "dma_device_type": 1 00:04:54.797 }, 00:04:54.797 { 00:04:54.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:54.797 "dma_device_type": 2 00:04:54.797 } 00:04:54.797 ], 00:04:54.797 "driver_specific": { 00:04:54.797 "passthru": { 00:04:54.797 "name": "Passthru0", 00:04:54.797 "base_bdev_name": "Malloc2" 00:04:54.797 } 00:04:54.797 } 00:04:54.797 } 00:04:54.797 ]' 00:04:54.797 09:58:40 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:54.797 00:04:54.797 real 0m0.286s 00:04:54.797 user 0m0.187s 00:04:54.797 sys 0m0.034s 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:54.797 09:58:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:54.797 ************************************ 00:04:54.797 END TEST rpc_daemon_integrity 00:04:54.797 ************************************ 00:04:54.797 09:58:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:54.797 09:58:40 rpc -- rpc/rpc.sh@84 -- # killprocess 2584896 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@947 -- # '[' -z 2584896 ']' 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@951 -- # kill -0 2584896 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@952 -- # uname 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2584896 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2584896' 00:04:54.797 killing process with pid 2584896 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@966 -- # kill 2584896 00:04:54.797 09:58:40 rpc -- common/autotest_common.sh@971 -- # wait 2584896 00:04:55.059 00:04:55.059 real 0m2.488s 00:04:55.059 user 0m3.300s 00:04:55.059 sys 0m0.683s 00:04:55.059 09:58:40 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:55.059 09:58:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.059 ************************************ 00:04:55.059 END TEST rpc 00:04:55.059 ************************************ 00:04:55.059 09:58:40 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:55.059 09:58:40 
-- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:55.059 09:58:40 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:55.059 09:58:40 -- common/autotest_common.sh@10 -- # set +x 00:04:55.059 ************************************ 00:04:55.059 START TEST skip_rpc 00:04:55.059 ************************************ 00:04:55.059 09:58:40 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:55.322 * Looking for test storage... 00:04:55.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.322 09:58:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:55.322 09:58:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:55.322 09:58:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:55.322 09:58:40 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:55.322 09:58:40 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:55.322 09:58:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.322 ************************************ 00:04:55.322 START TEST skip_rpc 00:04:55.322 ************************************ 00:04:55.322 09:58:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:04:55.322 09:58:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2585449 00:04:55.322 09:58:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.322 09:58:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:55.322 09:58:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:55.322 [2024-05-15 09:58:41.017730] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
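The skip_rpc case above launches spdk_tgt with --no-rpc-server and then asserts that an RPC call fails. A minimal sketch of the same check done by hand, assuming the SPDK build tree is the working directory (paths and flags taken from the entries above):

  # start the target without an RPC server, exactly as the test does
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5
  # with no RPC server listening, rpc.py is expected to fail
  if ! ./scripts/rpc.py spdk_get_version; then
      echo "RPC unavailable, as the test expects"
  fi
  kill "$tgt_pid"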
00:04:55.322 [2024-05-15 09:58:41.017779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585449 ] 00:04:55.322 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.322 [2024-05-15 09:58:41.076989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.322 [2024-05-15 09:58:41.108543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2585449 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 2585449 ']' 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 2585449 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:00.623 09:58:45 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2585449 00:05:00.623 09:58:46 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:00.623 09:58:46 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:00.623 09:58:46 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2585449' 00:05:00.623 killing process with pid 2585449 00:05:00.623 09:58:46 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 2585449 00:05:00.623 09:58:46 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 2585449 00:05:00.623 00:05:00.623 real 0m5.263s 00:05:00.623 user 0m5.072s 00:05:00.623 sys 0m0.224s 00:05:00.623 09:58:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:00.623 09:58:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.623 ************************************ 00:05:00.623 END TEST skip_rpc 
00:05:00.623 ************************************ 00:05:00.623 09:58:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:00.623 09:58:46 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:00.623 09:58:46 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:00.623 09:58:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.623 ************************************ 00:05:00.623 START TEST skip_rpc_with_json 00:05:00.623 ************************************ 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2586629 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2586629 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 2586629 ']' 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:00.623 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.623 [2024-05-15 09:58:46.367223] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
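waitforlisten above blocks until the new target answers on its UNIX domain socket before the JSON-config steps run. A rough, hypothetical stand-in for that wait, assuming the default /var/tmp/spdk.sock path shown above:

  # poll the RPC socket until spdk_tgt responds (stand-in for waitforlisten)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.1
  done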
00:05:00.623 [2024-05-15 09:58:46.367276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586629 ] 00:05:00.623 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.885 [2024-05-15 09:58:46.426973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.885 [2024-05-15 09:58:46.457782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.885 [2024-05-15 09:58:46.625388] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:00.885 request: 00:05:00.885 { 00:05:00.885 "trtype": "tcp", 00:05:00.885 "method": "nvmf_get_transports", 00:05:00.885 "req_id": 1 00:05:00.885 } 00:05:00.885 Got JSON-RPC error response 00:05:00.885 response: 00:05:00.885 { 00:05:00.885 "code": -19, 00:05:00.885 "message": "No such device" 00:05:00.885 } 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.885 [2024-05-15 09:58:46.637487] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:00.885 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:01.147 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.147 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:01.147 { 00:05:01.147 "subsystems": [ 00:05:01.147 { 00:05:01.147 "subsystem": "vfio_user_target", 00:05:01.147 "config": null 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "keyring", 00:05:01.147 "config": [] 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "iobuf", 00:05:01.147 "config": [ 00:05:01.147 { 00:05:01.147 "method": "iobuf_set_options", 00:05:01.147 "params": { 00:05:01.147 "small_pool_count": 8192, 00:05:01.147 "large_pool_count": 1024, 00:05:01.147 "small_bufsize": 8192, 00:05:01.147 "large_bufsize": 135168 00:05:01.147 } 00:05:01.147 } 00:05:01.147 ] 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "sock", 00:05:01.147 "config": [ 00:05:01.147 { 00:05:01.147 "method": "sock_impl_set_options", 00:05:01.147 "params": { 00:05:01.147 "impl_name": "posix", 00:05:01.147 "recv_buf_size": 2097152, 00:05:01.147 "send_buf_size": 2097152, 
00:05:01.147 "enable_recv_pipe": true, 00:05:01.147 "enable_quickack": false, 00:05:01.147 "enable_placement_id": 0, 00:05:01.147 "enable_zerocopy_send_server": true, 00:05:01.147 "enable_zerocopy_send_client": false, 00:05:01.147 "zerocopy_threshold": 0, 00:05:01.147 "tls_version": 0, 00:05:01.147 "enable_ktls": false 00:05:01.147 } 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "method": "sock_impl_set_options", 00:05:01.147 "params": { 00:05:01.147 "impl_name": "ssl", 00:05:01.147 "recv_buf_size": 4096, 00:05:01.147 "send_buf_size": 4096, 00:05:01.147 "enable_recv_pipe": true, 00:05:01.147 "enable_quickack": false, 00:05:01.147 "enable_placement_id": 0, 00:05:01.147 "enable_zerocopy_send_server": true, 00:05:01.147 "enable_zerocopy_send_client": false, 00:05:01.147 "zerocopy_threshold": 0, 00:05:01.147 "tls_version": 0, 00:05:01.147 "enable_ktls": false 00:05:01.147 } 00:05:01.147 } 00:05:01.147 ] 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "vmd", 00:05:01.147 "config": [] 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "accel", 00:05:01.147 "config": [ 00:05:01.147 { 00:05:01.147 "method": "accel_set_options", 00:05:01.147 "params": { 00:05:01.147 "small_cache_size": 128, 00:05:01.147 "large_cache_size": 16, 00:05:01.147 "task_count": 2048, 00:05:01.147 "sequence_count": 2048, 00:05:01.147 "buf_count": 2048 00:05:01.147 } 00:05:01.147 } 00:05:01.147 ] 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "bdev", 00:05:01.147 "config": [ 00:05:01.147 { 00:05:01.147 "method": "bdev_set_options", 00:05:01.147 "params": { 00:05:01.147 "bdev_io_pool_size": 65535, 00:05:01.147 "bdev_io_cache_size": 256, 00:05:01.147 "bdev_auto_examine": true, 00:05:01.147 "iobuf_small_cache_size": 128, 00:05:01.147 "iobuf_large_cache_size": 16 00:05:01.147 } 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "method": "bdev_raid_set_options", 00:05:01.147 "params": { 00:05:01.147 "process_window_size_kb": 1024 00:05:01.147 } 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "method": "bdev_iscsi_set_options", 00:05:01.147 "params": { 00:05:01.147 "timeout_sec": 30 00:05:01.147 } 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "method": "bdev_nvme_set_options", 00:05:01.147 "params": { 00:05:01.147 "action_on_timeout": "none", 00:05:01.147 "timeout_us": 0, 00:05:01.147 "timeout_admin_us": 0, 00:05:01.147 "keep_alive_timeout_ms": 10000, 00:05:01.147 "arbitration_burst": 0, 00:05:01.147 "low_priority_weight": 0, 00:05:01.147 "medium_priority_weight": 0, 00:05:01.147 "high_priority_weight": 0, 00:05:01.147 "nvme_adminq_poll_period_us": 10000, 00:05:01.147 "nvme_ioq_poll_period_us": 0, 00:05:01.147 "io_queue_requests": 0, 00:05:01.147 "delay_cmd_submit": true, 00:05:01.147 "transport_retry_count": 4, 00:05:01.147 "bdev_retry_count": 3, 00:05:01.147 "transport_ack_timeout": 0, 00:05:01.147 "ctrlr_loss_timeout_sec": 0, 00:05:01.147 "reconnect_delay_sec": 0, 00:05:01.147 "fast_io_fail_timeout_sec": 0, 00:05:01.147 "disable_auto_failback": false, 00:05:01.147 "generate_uuids": false, 00:05:01.147 "transport_tos": 0, 00:05:01.147 "nvme_error_stat": false, 00:05:01.147 "rdma_srq_size": 0, 00:05:01.147 "io_path_stat": false, 00:05:01.147 "allow_accel_sequence": false, 00:05:01.147 "rdma_max_cq_size": 0, 00:05:01.147 "rdma_cm_event_timeout_ms": 0, 00:05:01.147 "dhchap_digests": [ 00:05:01.147 "sha256", 00:05:01.147 "sha384", 00:05:01.147 "sha512" 00:05:01.147 ], 00:05:01.147 "dhchap_dhgroups": [ 00:05:01.147 "null", 00:05:01.147 "ffdhe2048", 00:05:01.147 "ffdhe3072", 00:05:01.147 "ffdhe4096", 00:05:01.147 
"ffdhe6144", 00:05:01.147 "ffdhe8192" 00:05:01.147 ] 00:05:01.147 } 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "method": "bdev_nvme_set_hotplug", 00:05:01.147 "params": { 00:05:01.147 "period_us": 100000, 00:05:01.147 "enable": false 00:05:01.147 } 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "method": "bdev_wait_for_examine" 00:05:01.147 } 00:05:01.147 ] 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "scsi", 00:05:01.147 "config": null 00:05:01.147 }, 00:05:01.147 { 00:05:01.147 "subsystem": "scheduler", 00:05:01.147 "config": [ 00:05:01.147 { 00:05:01.148 "method": "framework_set_scheduler", 00:05:01.148 "params": { 00:05:01.148 "name": "static" 00:05:01.148 } 00:05:01.148 } 00:05:01.148 ] 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "subsystem": "vhost_scsi", 00:05:01.148 "config": [] 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "subsystem": "vhost_blk", 00:05:01.148 "config": [] 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "subsystem": "ublk", 00:05:01.148 "config": [] 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "subsystem": "nbd", 00:05:01.148 "config": [] 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "subsystem": "nvmf", 00:05:01.148 "config": [ 00:05:01.148 { 00:05:01.148 "method": "nvmf_set_config", 00:05:01.148 "params": { 00:05:01.148 "discovery_filter": "match_any", 00:05:01.148 "admin_cmd_passthru": { 00:05:01.148 "identify_ctrlr": false 00:05:01.148 } 00:05:01.148 } 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "method": "nvmf_set_max_subsystems", 00:05:01.148 "params": { 00:05:01.148 "max_subsystems": 1024 00:05:01.148 } 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "method": "nvmf_set_crdt", 00:05:01.148 "params": { 00:05:01.148 "crdt1": 0, 00:05:01.148 "crdt2": 0, 00:05:01.148 "crdt3": 0 00:05:01.148 } 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "method": "nvmf_create_transport", 00:05:01.148 "params": { 00:05:01.148 "trtype": "TCP", 00:05:01.148 "max_queue_depth": 128, 00:05:01.148 "max_io_qpairs_per_ctrlr": 127, 00:05:01.148 "in_capsule_data_size": 4096, 00:05:01.148 "max_io_size": 131072, 00:05:01.148 "io_unit_size": 131072, 00:05:01.148 "max_aq_depth": 128, 00:05:01.148 "num_shared_buffers": 511, 00:05:01.148 "buf_cache_size": 4294967295, 00:05:01.148 "dif_insert_or_strip": false, 00:05:01.148 "zcopy": false, 00:05:01.148 "c2h_success": true, 00:05:01.148 "sock_priority": 0, 00:05:01.148 "abort_timeout_sec": 1, 00:05:01.148 "ack_timeout": 0, 00:05:01.148 "data_wr_pool_size": 0 00:05:01.148 } 00:05:01.148 } 00:05:01.148 ] 00:05:01.148 }, 00:05:01.148 { 00:05:01.148 "subsystem": "iscsi", 00:05:01.148 "config": [ 00:05:01.148 { 00:05:01.148 "method": "iscsi_set_options", 00:05:01.148 "params": { 00:05:01.148 "node_base": "iqn.2016-06.io.spdk", 00:05:01.148 "max_sessions": 128, 00:05:01.148 "max_connections_per_session": 2, 00:05:01.148 "max_queue_depth": 64, 00:05:01.148 "default_time2wait": 2, 00:05:01.148 "default_time2retain": 20, 00:05:01.148 "first_burst_length": 8192, 00:05:01.148 "immediate_data": true, 00:05:01.148 "allow_duplicated_isid": false, 00:05:01.148 "error_recovery_level": 0, 00:05:01.148 "nop_timeout": 60, 00:05:01.148 "nop_in_interval": 30, 00:05:01.148 "disable_chap": false, 00:05:01.148 "require_chap": false, 00:05:01.148 "mutual_chap": false, 00:05:01.148 "chap_group": 0, 00:05:01.148 "max_large_datain_per_connection": 64, 00:05:01.148 "max_r2t_per_connection": 4, 00:05:01.148 "pdu_pool_size": 36864, 00:05:01.148 "immediate_data_pool_size": 16384, 00:05:01.148 "data_out_pool_size": 2048 00:05:01.148 } 00:05:01.148 } 00:05:01.148 ] 00:05:01.148 } 
00:05:01.148 ] 00:05:01.148 } 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2586629 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2586629 ']' 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2586629 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2586629 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2586629' 00:05:01.148 killing process with pid 2586629 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2586629 00:05:01.148 09:58:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2586629 00:05:01.410 09:58:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2586796 00:05:01.410 09:58:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:01.410 09:58:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2586796 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 2586796 ']' 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 2586796 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2586796 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2586796' 00:05:06.707 killing process with pid 2586796 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 2586796 00:05:06.707 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 2586796 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.708 00:05:06.708 real 0m6.004s 00:05:06.708 user 0m5.834s 00:05:06.708 sys 0m0.492s 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 
00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.708 ************************************ 00:05:06.708 END TEST skip_rpc_with_json 00:05:06.708 ************************************ 00:05:06.708 09:58:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:06.708 09:58:52 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:06.708 09:58:52 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:06.708 09:58:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.708 ************************************ 00:05:06.708 START TEST skip_rpc_with_delay 00:05:06.708 ************************************ 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.708 [2024-05-15 09:58:52.434317] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
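The error above is the point of skip_rpc_with_delay: spdk_tgt refuses --wait-for-rpc when --no-rpc-server means no RPC server will ever come up. A one-line reproduction, assuming the build tree as the working directory:

  # expected to exit non-zero with the "Cannot use '--wait-for-rpc'" error seen above
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo "exit status: $?"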
00:05:06.708 [2024-05-15 09:58:52.434391] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:06.708 00:05:06.708 real 0m0.070s 00:05:06.708 user 0m0.045s 00:05:06.708 sys 0m0.024s 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:06.708 09:58:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.708 ************************************ 00:05:06.708 END TEST skip_rpc_with_delay 00:05:06.708 ************************************ 00:05:06.708 09:58:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.708 09:58:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.708 09:58:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.708 09:58:52 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:06.708 09:58:52 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:06.708 09:58:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.970 ************************************ 00:05:06.970 START TEST exit_on_failed_rpc_init 00:05:06.970 ************************************ 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2587856 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2587856 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 2587856 ']' 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:06.970 09:58:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.970 [2024-05-15 09:58:52.584122] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:06.970 [2024-05-15 09:58:52.584182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587856 ] 00:05:06.970 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.970 [2024-05-15 09:58:52.647025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.970 [2024-05-15 09:58:52.686364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.913 [2024-05-15 09:58:53.422485] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:07.913 [2024-05-15 09:58:53.422537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588188 ] 00:05:07.913 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.913 [2024-05-15 09:58:53.498714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.913 [2024-05-15 09:58:53.529345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.913 [2024-05-15 09:58:53.529407] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
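exit_on_failed_rpc_init provokes the failure above by starting a second spdk_tgt while the first still owns /var/tmp/spdk.sock. A condensed sketch of the conflict (the sleep is a simplification; the real test waits for the first target's RPC socket to come up):

  # first target claims the default RPC socket
  ./build/bin/spdk_tgt -m 0x1 &
  first_pid=$!
  sleep 1
  # second target should fail: "RPC Unix domain socket path /var/tmp/spdk.sock in use."
  ./build/bin/spdk_tgt -m 0x2
  echo "second target exit status: $?"
  kill "$first_pid"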
00:05:07.913 [2024-05-15 09:58:53.529417] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:07.913 [2024-05-15 09:58:53.529423] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2587856 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 2587856 ']' 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 2587856 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2587856 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2587856' 00:05:07.913 killing process with pid 2587856 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 2587856 00:05:07.913 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 2587856 00:05:08.173 00:05:08.173 real 0m1.293s 00:05:08.173 user 0m1.497s 00:05:08.173 sys 0m0.361s 00:05:08.173 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:08.173 09:58:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.173 ************************************ 00:05:08.173 END TEST exit_on_failed_rpc_init 00:05:08.173 ************************************ 00:05:08.173 09:58:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.173 00:05:08.173 real 0m13.032s 00:05:08.173 user 0m12.583s 00:05:08.173 sys 0m1.378s 00:05:08.173 09:58:53 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:08.173 09:58:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.173 ************************************ 00:05:08.173 END TEST skip_rpc 00:05:08.173 ************************************ 00:05:08.173 09:58:53 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:08.173 09:58:53 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:08.173 09:58:53 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:08.173 09:58:53 -- 
common/autotest_common.sh@10 -- # set +x 00:05:08.173 ************************************ 00:05:08.173 START TEST rpc_client 00:05:08.173 ************************************ 00:05:08.173 09:58:53 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:08.435 * Looking for test storage... 00:05:08.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:08.435 09:58:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:08.435 OK 00:05:08.435 09:58:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:08.435 00:05:08.435 real 0m0.121s 00:05:08.435 user 0m0.053s 00:05:08.435 sys 0m0.075s 00:05:08.435 09:58:54 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:08.435 09:58:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:08.435 ************************************ 00:05:08.435 END TEST rpc_client 00:05:08.435 ************************************ 00:05:08.435 09:58:54 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:08.435 09:58:54 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:08.435 09:58:54 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:08.435 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:05:08.435 ************************************ 00:05:08.435 START TEST json_config 00:05:08.435 ************************************ 00:05:08.435 09:58:54 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:08.435 09:58:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.435 09:58:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.696 09:58:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.696 09:58:54 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.696 09:58:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.696 09:58:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.696 09:58:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.696 09:58:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.696 09:58:54 json_config -- paths/export.sh@5 -- # export PATH 00:05:08.696 09:58:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@47 -- # : 0 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:08.696 09:58:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:08.696 INFO: JSON configuration test init 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:08.696 09:58:54 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:08.696 09:58:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:08.696 09:58:54 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:08.696 09:58:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.696 09:58:54 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:08.696 09:58:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.697 09:58:54 json_config -- json_config/common.sh@10 -- # shift 00:05:08.697 09:58:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.697 09:58:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.697 09:58:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.697 09:58:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.697 09:58:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.697 09:58:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2588313 00:05:08.697 09:58:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.697 Waiting for target to run... 
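json_config runs the whole test against a target on its own RPC socket, started in --wait-for-rpc mode so that configuration is driven entirely over RPC. A rough sketch of the startup and config load that the following entries perform, with paths and flags as in the log:

  # start the target paused on a private RPC socket (flags as in the entries below)
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # feed it a generated NVMe configuration once the socket is up
  ./scripts/gen_nvme.sh --json-with-subsystems | ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config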
00:05:08.697 09:58:54 json_config -- json_config/common.sh@25 -- # waitforlisten 2588313 /var/tmp/spdk_tgt.sock 00:05:08.697 09:58:54 json_config -- common/autotest_common.sh@828 -- # '[' -z 2588313 ']' 00:05:08.697 09:58:54 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.697 09:58:54 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:08.697 09:58:54 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.697 09:58:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:08.697 09:58:54 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:08.697 09:58:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.697 [2024-05-15 09:58:54.321393] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:08.697 [2024-05-15 09:58:54.321484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588313 ] 00:05:08.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.958 [2024-05-15 09:58:54.574507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.958 [2024-05-15 09:58:54.591916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.530 09:58:55 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:09.530 09:58:55 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:09.530 09:58:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.530 00:05:09.530 09:58:55 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:09.530 09:58:55 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:09.530 09:58:55 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:09.530 09:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.530 09:58:55 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:09.530 09:58:55 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:09.530 09:58:55 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:09.530 09:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:09.530 09:58:55 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:09.530 09:58:55 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:09.530 09:58:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:10.103 09:58:55 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:10.103 09:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.103 09:58:55 json_config -- 
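The entries that follow build the NVMf target configuration one RPC at a time. Condensed into plain rpc.py calls (names, sizes and the listener address taken from the log below; socket path as above), the sequence is roughly:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420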
json_config/json_config.sh@45 -- # local ret=0 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:10.103 09:58:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:10.103 09:58:55 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:10.103 09:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:10.103 09:58:55 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:10.103 09:58:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:10.103 09:58:55 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.103 09:58:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.375 MallocForNvmf0 00:05:10.375 09:58:56 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.375 09:58:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.638 MallocForNvmf1 00:05:10.638 09:58:56 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.638 09:58:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.638 [2024-05-15 09:58:56.340404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.638 09:58:56 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.638 09:58:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.899 09:58:56 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.899 09:58:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.161 09:58:56 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.161 09:58:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.161 09:58:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.161 09:58:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.423 [2024-05-15 09:58:57.010341] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:11.423 [2024-05-15 09:58:57.010805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.423 09:58:57 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:11.423 09:58:57 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:11.423 09:58:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.423 09:58:57 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:11.423 09:58:57 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:11.423 09:58:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.423 09:58:57 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:11.423 09:58:57 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.423 09:58:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.684 MallocBdevForConfigChangeCheck 00:05:11.684 09:58:57 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:11.684 09:58:57 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:11.684 09:58:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.684 09:58:57 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:11.684 09:58:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:11.945 09:58:57 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
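Condensing the create_nvmf_subsystem_config steps traced around here into one place; these are the same RPCs and socket seen in the trace, with only the shell variable added for brevity:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # Two malloc bdevs to act as namespaces
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then a subsystem with both namespaces and a listener
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

As the deprecation warning in the trace notes, the [listen_]address.transport form used by the listener RPC is slated for removal in v24.09 in favor of trtype.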
applications...' 00:05:11.945 INFO: shutting down applications... 00:05:11.945 09:58:57 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:11.945 09:58:57 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:11.945 09:58:57 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:11.945 09:58:57 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:12.517 Calling clear_iscsi_subsystem 00:05:12.517 Calling clear_nvmf_subsystem 00:05:12.517 Calling clear_nbd_subsystem 00:05:12.517 Calling clear_ublk_subsystem 00:05:12.517 Calling clear_vhost_blk_subsystem 00:05:12.517 Calling clear_vhost_scsi_subsystem 00:05:12.517 Calling clear_bdev_subsystem 00:05:12.517 09:58:58 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:12.517 09:58:58 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:12.517 09:58:58 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:12.517 09:58:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.517 09:58:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:12.517 09:58:58 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:12.778 09:58:58 json_config -- json_config/json_config.sh@345 -- # break 00:05:12.778 09:58:58 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:12.778 09:58:58 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:12.778 09:58:58 json_config -- json_config/common.sh@31 -- # local app=target 00:05:12.778 09:58:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:12.778 09:58:58 json_config -- json_config/common.sh@35 -- # [[ -n 2588313 ]] 00:05:12.778 09:58:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2588313 00:05:12.778 [2024-05-15 09:58:58.343812] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:12.778 09:58:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:12.778 09:58:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.778 09:58:58 json_config -- json_config/common.sh@41 -- # kill -0 2588313 00:05:12.778 09:58:58 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.352 09:58:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.352 09:58:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.352 09:58:58 json_config -- json_config/common.sh@41 -- # kill -0 2588313 00:05:13.352 09:58:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.352 09:58:58 json_config -- json_config/common.sh@43 -- # break 00:05:13.352 09:58:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.352 09:58:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.352 SPDK target shutdown done 00:05:13.352 09:58:58 json_config -- 
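The teardown traced above follows a simple pattern from json_config/common.sh: ask clear_config.py to empty the target, send SIGINT, then poll the pid until it exits. A condensed sketch of that wait loop, using the same 30-iteration / 0.5 s cadence visible in the trace:

    app_pid=2588313                      # pid recorded when the target was launched
    kill -SIGINT "$app_pid"              # request a clean shutdown
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # still alive?
        sleep 0.5
    done
    echo 'SPDK target shutdown done'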
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:13.352 INFO: relaunching applications... 00:05:13.352 09:58:58 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.352 09:58:58 json_config -- json_config/common.sh@9 -- # local app=target 00:05:13.352 09:58:58 json_config -- json_config/common.sh@10 -- # shift 00:05:13.352 09:58:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.352 09:58:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.352 09:58:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.352 09:58:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.352 09:58:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.352 09:58:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2589438 00:05:13.352 09:58:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.352 Waiting for target to run... 00:05:13.352 09:58:58 json_config -- json_config/common.sh@25 -- # waitforlisten 2589438 /var/tmp/spdk_tgt.sock 00:05:13.352 09:58:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.352 09:58:58 json_config -- common/autotest_common.sh@828 -- # '[' -z 2589438 ']' 00:05:13.352 09:58:58 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.352 09:58:58 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:13.352 09:58:58 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.352 09:58:58 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:13.352 09:58:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.352 [2024-05-15 09:58:58.905091] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:13.352 [2024-05-15 09:58:58.905151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589438 ] 00:05:13.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.613 [2024-05-15 09:58:59.176258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.613 [2024-05-15 09:58:59.195046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.875 [2024-05-15 09:58:59.658296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:14.137 [2024-05-15 09:58:59.690271] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:14.137 [2024-05-15 09:58:59.690676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:14.137 09:58:59 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:14.137 09:58:59 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:14.137 09:58:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:14.137 00:05:14.137 09:58:59 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:14.137 09:58:59 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:14.137 INFO: Checking if target configuration is the same... 00:05:14.137 09:58:59 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.137 09:58:59 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:14.137 09:58:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.137 + '[' 2 -ne 2 ']' 00:05:14.137 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.137 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.137 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.137 +++ basename /dev/fd/62 00:05:14.137 ++ mktemp /tmp/62.XXX 00:05:14.137 + tmp_file_1=/tmp/62.eJc 00:05:14.137 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.137 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.137 + tmp_file_2=/tmp/spdk_tgt_config.json.SRX 00:05:14.137 + ret=0 00:05:14.137 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.402 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.402 + diff -u /tmp/62.eJc /tmp/spdk_tgt_config.json.SRX 00:05:14.402 + echo 'INFO: JSON config files are the same' 00:05:14.402 INFO: JSON config files are the same 00:05:14.402 + rm /tmp/62.eJc /tmp/spdk_tgt_config.json.SRX 00:05:14.402 + exit 0 00:05:14.402 09:59:00 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:14.402 09:59:00 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:14.402 INFO: changing configuration and checking if this can be detected... 
00:05:14.402 09:59:00 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.402 09:59:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:14.724 09:59:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:14.724 09:59:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:14.724 09:59:00 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.724 + '[' 2 -ne 2 ']' 00:05:14.724 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:14.724 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:14.724 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:14.724 +++ basename /dev/fd/62 00:05:14.724 ++ mktemp /tmp/62.XXX 00:05:14.724 + tmp_file_1=/tmp/62.2I1 00:05:14.724 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.724 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:14.724 + tmp_file_2=/tmp/spdk_tgt_config.json.mWf 00:05:14.724 + ret=0 00:05:14.724 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.986 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:14.986 + diff -u /tmp/62.2I1 /tmp/spdk_tgt_config.json.mWf 00:05:14.986 + ret=1 00:05:14.986 + echo '=== Start of file: /tmp/62.2I1 ===' 00:05:14.986 + cat /tmp/62.2I1 00:05:14.986 + echo '=== End of file: /tmp/62.2I1 ===' 00:05:14.986 + echo '' 00:05:14.986 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mWf ===' 00:05:14.986 + cat /tmp/spdk_tgt_config.json.mWf 00:05:14.986 + echo '=== End of file: /tmp/spdk_tgt_config.json.mWf ===' 00:05:14.986 + echo '' 00:05:14.986 + rm /tmp/62.2I1 /tmp/spdk_tgt_config.json.mWf 00:05:14.986 + exit 1 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:14.986 INFO: configuration change detected. 
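How the two comparisons above reach their verdicts, condensed from the json_diff.sh trace: both the live configuration (save_config) and the stored spdk_tgt_config.json are normalised with config_filter.py -method sort and then compared textually. A sketch under the assumption that the filter reads stdin and writes stdout; the trace does not show its I/O redirection explicitly:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sort_cfg="$spdk/test/json_config/config_filter.py -method sort"
    $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $sort_cfg > /tmp/live.json
    $sort_cfg < "$spdk/spdk_tgt_config.json" > /tmp/saved.json
    if diff -u /tmp/saved.json /tmp/live.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'   # e.g. after the bdev_malloc_delete above
    fi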
00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@317 -- # [[ -n 2589438 ]] 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.986 09:59:00 json_config -- json_config/json_config.sh@323 -- # killprocess 2589438 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@947 -- # '[' -z 2589438 ']' 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@951 -- # kill -0 2589438 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@952 -- # uname 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2589438 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2589438' 00:05:14.986 killing process with pid 2589438 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@966 -- # kill 2589438 00:05:14.986 [2024-05-15 09:59:00.764542] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:14.986 09:59:00 json_config -- common/autotest_common.sh@971 -- # wait 2589438 00:05:15.248 09:59:01 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.248 09:59:01 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:15.248 09:59:01 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:15.248 09:59:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.511 09:59:01 
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:15.511 09:59:01 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:15.511 INFO: Success 00:05:15.511 00:05:15.511 real 0m6.935s 00:05:15.511 user 0m8.530s 00:05:15.511 sys 0m1.716s 00:05:15.511 09:59:01 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:15.511 09:59:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.511 ************************************ 00:05:15.511 END TEST json_config 00:05:15.511 ************************************ 00:05:15.511 09:59:01 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.511 09:59:01 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:15.511 09:59:01 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:15.511 09:59:01 -- common/autotest_common.sh@10 -- # set +x 00:05:15.511 ************************************ 00:05:15.511 START TEST json_config_extra_key 00:05:15.511 ************************************ 00:05:15.511 09:59:01 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:15.511 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.511 09:59:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.511 09:59:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.511 09:59:01 
json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.511 09:59:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.511 09:59:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.511 09:59:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.511 09:59:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:15.511 09:59:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:15.511 09:59:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:15.512 09:59:01 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:15.512 INFO: launching applications... 00:05:15.512 09:59:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2589980 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.512 Waiting for target to run... 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2589980 /var/tmp/spdk_tgt.sock 00:05:15.512 09:59:01 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 2589980 ']' 00:05:15.512 09:59:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.512 09:59:01 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.512 09:59:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:15.512 09:59:01 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.512 09:59:01 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:15.512 09:59:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.774 [2024-05-15 09:59:01.323278] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
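The command traced above shows the second launch mode these tests exercise: instead of --wait-for-rpc followed by RPC-driven setup, the target boots directly from a prepared JSON file. Reproduced from the trace, with backgrounding added so a caller could continue:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json &
    app_pid=$!    # the harness stores this as app_pid["target"] (2589980 in this run)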
00:05:15.774 [2024-05-15 09:59:01.323363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589980 ] 00:05:15.774 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.036 [2024-05-15 09:59:01.591193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.036 [2024-05-15 09:59:01.607745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.610 09:59:02 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:16.610 09:59:02 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:16.610 00:05:16.610 09:59:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:16.610 INFO: shutting down applications... 00:05:16.610 09:59:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2589980 ]] 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2589980 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2589980 00:05:16.610 09:59:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.873 09:59:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.873 09:59:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.873 09:59:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2589980 00:05:16.873 09:59:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:16.873 09:59:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:16.873 09:59:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:16.873 09:59:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:16.873 SPDK target shutdown done 00:05:16.873 09:59:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:16.873 Success 00:05:16.873 00:05:16.873 real 0m1.443s 00:05:16.873 user 0m1.076s 00:05:16.873 sys 0m0.371s 00:05:16.873 09:59:02 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:16.873 09:59:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.873 ************************************ 00:05:16.873 END TEST json_config_extra_key 00:05:16.873 ************************************ 00:05:16.873 09:59:02 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.873 09:59:02 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:16.873 09:59:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:16.873 09:59:02 -- common/autotest_common.sh@10 -- # set +x 00:05:17.135 ************************************ 
00:05:17.135 START TEST alias_rpc 00:05:17.135 ************************************ 00:05:17.135 09:59:02 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.135 * Looking for test storage... 00:05:17.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:17.135 09:59:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.135 09:59:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2590292 00:05:17.135 09:59:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2590292 00:05:17.135 09:59:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.135 09:59:02 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 2590292 ']' 00:05:17.135 09:59:02 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.135 09:59:02 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:17.135 09:59:02 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.135 09:59:02 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:17.135 09:59:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.135 [2024-05-15 09:59:02.837889] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:17.135 [2024-05-15 09:59:02.837967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590292 ] 00:05:17.135 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.135 [2024-05-15 09:59:02.903264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.398 [2024-05-15 09:59:02.942977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.971 09:59:03 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:17.971 09:59:03 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:17.971 09:59:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:18.233 09:59:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2590292 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 2590292 ']' 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 2590292 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2590292 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2590292' 00:05:18.233 killing process with pid 2590292 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@966 -- # kill 2590292 00:05:18.233 09:59:03 alias_rpc -- common/autotest_common.sh@971 -- # wait 2590292 
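The killprocess helper traced here (and earlier for the json_config pids) boils down to a guarded kill-and-reap. A simplified sketch, assuming the common case where the target runs unprivileged; the real autotest_common.sh also handles the sudo-wrapped case it checks for:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # nothing to do if it is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 for spdk_tgt
        [ "$name" = sudo ] && return 0                # sudo wrapper handled separately
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap and propagate the exit status
    }
    killprocess 2590292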
00:05:18.496 00:05:18.496 real 0m1.368s 00:05:18.496 user 0m1.535s 00:05:18.496 sys 0m0.357s 00:05:18.496 09:59:04 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:18.497 09:59:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.497 ************************************ 00:05:18.497 END TEST alias_rpc 00:05:18.497 ************************************ 00:05:18.497 09:59:04 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:18.497 09:59:04 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.497 09:59:04 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:18.497 09:59:04 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.497 09:59:04 -- common/autotest_common.sh@10 -- # set +x 00:05:18.497 ************************************ 00:05:18.497 START TEST spdkcli_tcp 00:05:18.497 ************************************ 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.497 * Looking for test storage... 00:05:18.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2590668 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2590668 00:05:18.497 09:59:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 2590668 ']' 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:18.497 09:59:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.497 [2024-05-15 09:59:04.289735] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:18.497 [2024-05-15 09:59:04.289810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590668 ] 00:05:18.759 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.759 [2024-05-15 09:59:04.356369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.759 [2024-05-15 09:59:04.395932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.759 [2024-05-15 09:59:04.395935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.333 09:59:05 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:19.333 09:59:05 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:05:19.333 09:59:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2590982 00:05:19.333 09:59:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:19.333 09:59:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:19.596 [ 00:05:19.596 "bdev_malloc_delete", 00:05:19.596 "bdev_malloc_create", 00:05:19.596 "bdev_null_resize", 00:05:19.596 "bdev_null_delete", 00:05:19.596 "bdev_null_create", 00:05:19.596 "bdev_nvme_cuse_unregister", 00:05:19.596 "bdev_nvme_cuse_register", 00:05:19.596 "bdev_opal_new_user", 00:05:19.596 "bdev_opal_set_lock_state", 00:05:19.596 "bdev_opal_delete", 00:05:19.596 "bdev_opal_get_info", 00:05:19.596 "bdev_opal_create", 00:05:19.596 "bdev_nvme_opal_revert", 00:05:19.596 "bdev_nvme_opal_init", 00:05:19.596 "bdev_nvme_send_cmd", 00:05:19.596 "bdev_nvme_get_path_iostat", 00:05:19.596 "bdev_nvme_get_mdns_discovery_info", 00:05:19.596 "bdev_nvme_stop_mdns_discovery", 00:05:19.596 "bdev_nvme_start_mdns_discovery", 00:05:19.596 "bdev_nvme_set_multipath_policy", 00:05:19.596 "bdev_nvme_set_preferred_path", 00:05:19.596 "bdev_nvme_get_io_paths", 00:05:19.596 "bdev_nvme_remove_error_injection", 00:05:19.596 "bdev_nvme_add_error_injection", 00:05:19.596 "bdev_nvme_get_discovery_info", 00:05:19.596 "bdev_nvme_stop_discovery", 00:05:19.596 "bdev_nvme_start_discovery", 00:05:19.596 "bdev_nvme_get_controller_health_info", 00:05:19.596 "bdev_nvme_disable_controller", 00:05:19.596 "bdev_nvme_enable_controller", 00:05:19.596 "bdev_nvme_reset_controller", 00:05:19.596 "bdev_nvme_get_transport_statistics", 00:05:19.596 "bdev_nvme_apply_firmware", 00:05:19.596 "bdev_nvme_detach_controller", 00:05:19.596 "bdev_nvme_get_controllers", 00:05:19.596 "bdev_nvme_attach_controller", 00:05:19.596 "bdev_nvme_set_hotplug", 00:05:19.596 "bdev_nvme_set_options", 00:05:19.596 "bdev_passthru_delete", 00:05:19.596 "bdev_passthru_create", 00:05:19.596 "bdev_lvol_check_shallow_copy", 00:05:19.596 "bdev_lvol_start_shallow_copy", 00:05:19.596 "bdev_lvol_grow_lvstore", 00:05:19.596 "bdev_lvol_get_lvols", 00:05:19.596 "bdev_lvol_get_lvstores", 00:05:19.596 "bdev_lvol_delete", 00:05:19.596 "bdev_lvol_set_read_only", 00:05:19.596 "bdev_lvol_resize", 00:05:19.596 "bdev_lvol_decouple_parent", 00:05:19.596 "bdev_lvol_inflate", 00:05:19.596 "bdev_lvol_rename", 00:05:19.596 "bdev_lvol_clone_bdev", 00:05:19.596 "bdev_lvol_clone", 00:05:19.596 "bdev_lvol_snapshot", 00:05:19.596 "bdev_lvol_create", 00:05:19.596 "bdev_lvol_delete_lvstore", 00:05:19.596 "bdev_lvol_rename_lvstore", 00:05:19.596 "bdev_lvol_create_lvstore", 00:05:19.596 "bdev_raid_set_options", 
00:05:19.596 "bdev_raid_remove_base_bdev", 00:05:19.596 "bdev_raid_add_base_bdev", 00:05:19.596 "bdev_raid_delete", 00:05:19.596 "bdev_raid_create", 00:05:19.596 "bdev_raid_get_bdevs", 00:05:19.596 "bdev_error_inject_error", 00:05:19.596 "bdev_error_delete", 00:05:19.596 "bdev_error_create", 00:05:19.596 "bdev_split_delete", 00:05:19.596 "bdev_split_create", 00:05:19.596 "bdev_delay_delete", 00:05:19.596 "bdev_delay_create", 00:05:19.596 "bdev_delay_update_latency", 00:05:19.596 "bdev_zone_block_delete", 00:05:19.596 "bdev_zone_block_create", 00:05:19.596 "blobfs_create", 00:05:19.596 "blobfs_detect", 00:05:19.596 "blobfs_set_cache_size", 00:05:19.596 "bdev_aio_delete", 00:05:19.596 "bdev_aio_rescan", 00:05:19.596 "bdev_aio_create", 00:05:19.596 "bdev_ftl_set_property", 00:05:19.596 "bdev_ftl_get_properties", 00:05:19.596 "bdev_ftl_get_stats", 00:05:19.596 "bdev_ftl_unmap", 00:05:19.596 "bdev_ftl_unload", 00:05:19.596 "bdev_ftl_delete", 00:05:19.596 "bdev_ftl_load", 00:05:19.596 "bdev_ftl_create", 00:05:19.596 "bdev_virtio_attach_controller", 00:05:19.596 "bdev_virtio_scsi_get_devices", 00:05:19.596 "bdev_virtio_detach_controller", 00:05:19.596 "bdev_virtio_blk_set_hotplug", 00:05:19.596 "bdev_iscsi_delete", 00:05:19.596 "bdev_iscsi_create", 00:05:19.596 "bdev_iscsi_set_options", 00:05:19.596 "accel_error_inject_error", 00:05:19.596 "ioat_scan_accel_module", 00:05:19.596 "dsa_scan_accel_module", 00:05:19.596 "iaa_scan_accel_module", 00:05:19.596 "vfu_virtio_create_scsi_endpoint", 00:05:19.596 "vfu_virtio_scsi_remove_target", 00:05:19.596 "vfu_virtio_scsi_add_target", 00:05:19.596 "vfu_virtio_create_blk_endpoint", 00:05:19.596 "vfu_virtio_delete_endpoint", 00:05:19.596 "keyring_file_remove_key", 00:05:19.596 "keyring_file_add_key", 00:05:19.596 "iscsi_get_histogram", 00:05:19.596 "iscsi_enable_histogram", 00:05:19.596 "iscsi_set_options", 00:05:19.596 "iscsi_get_auth_groups", 00:05:19.596 "iscsi_auth_group_remove_secret", 00:05:19.596 "iscsi_auth_group_add_secret", 00:05:19.596 "iscsi_delete_auth_group", 00:05:19.596 "iscsi_create_auth_group", 00:05:19.596 "iscsi_set_discovery_auth", 00:05:19.596 "iscsi_get_options", 00:05:19.596 "iscsi_target_node_request_logout", 00:05:19.596 "iscsi_target_node_set_redirect", 00:05:19.596 "iscsi_target_node_set_auth", 00:05:19.596 "iscsi_target_node_add_lun", 00:05:19.596 "iscsi_get_stats", 00:05:19.596 "iscsi_get_connections", 00:05:19.596 "iscsi_portal_group_set_auth", 00:05:19.596 "iscsi_start_portal_group", 00:05:19.596 "iscsi_delete_portal_group", 00:05:19.596 "iscsi_create_portal_group", 00:05:19.596 "iscsi_get_portal_groups", 00:05:19.596 "iscsi_delete_target_node", 00:05:19.596 "iscsi_target_node_remove_pg_ig_maps", 00:05:19.596 "iscsi_target_node_add_pg_ig_maps", 00:05:19.596 "iscsi_create_target_node", 00:05:19.596 "iscsi_get_target_nodes", 00:05:19.597 "iscsi_delete_initiator_group", 00:05:19.597 "iscsi_initiator_group_remove_initiators", 00:05:19.597 "iscsi_initiator_group_add_initiators", 00:05:19.597 "iscsi_create_initiator_group", 00:05:19.597 "iscsi_get_initiator_groups", 00:05:19.597 "nvmf_set_crdt", 00:05:19.597 "nvmf_set_config", 00:05:19.597 "nvmf_set_max_subsystems", 00:05:19.597 "nvmf_stop_mdns_prr", 00:05:19.597 "nvmf_publish_mdns_prr", 00:05:19.597 "nvmf_subsystem_get_listeners", 00:05:19.597 "nvmf_subsystem_get_qpairs", 00:05:19.597 "nvmf_subsystem_get_controllers", 00:05:19.597 "nvmf_get_stats", 00:05:19.597 "nvmf_get_transports", 00:05:19.597 "nvmf_create_transport", 00:05:19.597 "nvmf_get_targets", 00:05:19.597 
"nvmf_delete_target", 00:05:19.597 "nvmf_create_target", 00:05:19.597 "nvmf_subsystem_allow_any_host", 00:05:19.597 "nvmf_subsystem_remove_host", 00:05:19.597 "nvmf_subsystem_add_host", 00:05:19.597 "nvmf_ns_remove_host", 00:05:19.597 "nvmf_ns_add_host", 00:05:19.597 "nvmf_subsystem_remove_ns", 00:05:19.597 "nvmf_subsystem_add_ns", 00:05:19.597 "nvmf_subsystem_listener_set_ana_state", 00:05:19.597 "nvmf_discovery_get_referrals", 00:05:19.597 "nvmf_discovery_remove_referral", 00:05:19.597 "nvmf_discovery_add_referral", 00:05:19.597 "nvmf_subsystem_remove_listener", 00:05:19.597 "nvmf_subsystem_add_listener", 00:05:19.597 "nvmf_delete_subsystem", 00:05:19.597 "nvmf_create_subsystem", 00:05:19.597 "nvmf_get_subsystems", 00:05:19.597 "env_dpdk_get_mem_stats", 00:05:19.597 "nbd_get_disks", 00:05:19.597 "nbd_stop_disk", 00:05:19.597 "nbd_start_disk", 00:05:19.597 "ublk_recover_disk", 00:05:19.597 "ublk_get_disks", 00:05:19.597 "ublk_stop_disk", 00:05:19.597 "ublk_start_disk", 00:05:19.597 "ublk_destroy_target", 00:05:19.597 "ublk_create_target", 00:05:19.597 "virtio_blk_create_transport", 00:05:19.597 "virtio_blk_get_transports", 00:05:19.597 "vhost_controller_set_coalescing", 00:05:19.597 "vhost_get_controllers", 00:05:19.597 "vhost_delete_controller", 00:05:19.597 "vhost_create_blk_controller", 00:05:19.597 "vhost_scsi_controller_remove_target", 00:05:19.597 "vhost_scsi_controller_add_target", 00:05:19.597 "vhost_start_scsi_controller", 00:05:19.597 "vhost_create_scsi_controller", 00:05:19.597 "thread_set_cpumask", 00:05:19.597 "framework_get_scheduler", 00:05:19.597 "framework_set_scheduler", 00:05:19.597 "framework_get_reactors", 00:05:19.597 "thread_get_io_channels", 00:05:19.597 "thread_get_pollers", 00:05:19.597 "thread_get_stats", 00:05:19.597 "framework_monitor_context_switch", 00:05:19.597 "spdk_kill_instance", 00:05:19.597 "log_enable_timestamps", 00:05:19.597 "log_get_flags", 00:05:19.597 "log_clear_flag", 00:05:19.597 "log_set_flag", 00:05:19.597 "log_get_level", 00:05:19.597 "log_set_level", 00:05:19.597 "log_get_print_level", 00:05:19.597 "log_set_print_level", 00:05:19.597 "framework_enable_cpumask_locks", 00:05:19.597 "framework_disable_cpumask_locks", 00:05:19.597 "framework_wait_init", 00:05:19.597 "framework_start_init", 00:05:19.597 "scsi_get_devices", 00:05:19.597 "bdev_get_histogram", 00:05:19.597 "bdev_enable_histogram", 00:05:19.597 "bdev_set_qos_limit", 00:05:19.597 "bdev_set_qd_sampling_period", 00:05:19.597 "bdev_get_bdevs", 00:05:19.597 "bdev_reset_iostat", 00:05:19.597 "bdev_get_iostat", 00:05:19.597 "bdev_examine", 00:05:19.597 "bdev_wait_for_examine", 00:05:19.597 "bdev_set_options", 00:05:19.597 "notify_get_notifications", 00:05:19.597 "notify_get_types", 00:05:19.597 "accel_get_stats", 00:05:19.597 "accel_set_options", 00:05:19.597 "accel_set_driver", 00:05:19.597 "accel_crypto_key_destroy", 00:05:19.597 "accel_crypto_keys_get", 00:05:19.597 "accel_crypto_key_create", 00:05:19.597 "accel_assign_opc", 00:05:19.597 "accel_get_module_info", 00:05:19.597 "accel_get_opc_assignments", 00:05:19.597 "vmd_rescan", 00:05:19.597 "vmd_remove_device", 00:05:19.597 "vmd_enable", 00:05:19.597 "sock_get_default_impl", 00:05:19.597 "sock_set_default_impl", 00:05:19.597 "sock_impl_set_options", 00:05:19.597 "sock_impl_get_options", 00:05:19.597 "iobuf_get_stats", 00:05:19.597 "iobuf_set_options", 00:05:19.597 "keyring_get_keys", 00:05:19.597 "framework_get_pci_devices", 00:05:19.597 "framework_get_config", 00:05:19.597 "framework_get_subsystems", 00:05:19.597 
"vfu_tgt_set_base_path", 00:05:19.597 "trace_get_info", 00:05:19.597 "trace_get_tpoint_group_mask", 00:05:19.597 "trace_disable_tpoint_group", 00:05:19.597 "trace_enable_tpoint_group", 00:05:19.597 "trace_clear_tpoint_mask", 00:05:19.597 "trace_set_tpoint_mask", 00:05:19.597 "spdk_get_version", 00:05:19.597 "rpc_get_methods" 00:05:19.597 ] 00:05:19.597 09:59:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.597 09:59:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:19.597 09:59:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2590668 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 2590668 ']' 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 2590668 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2590668 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2590668' 00:05:19.597 killing process with pid 2590668 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 2590668 00:05:19.597 09:59:05 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 2590668 00:05:19.859 00:05:19.859 real 0m1.388s 00:05:19.859 user 0m2.589s 00:05:19.859 sys 0m0.417s 00:05:19.860 09:59:05 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:19.860 09:59:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.860 ************************************ 00:05:19.860 END TEST spdkcli_tcp 00:05:19.860 ************************************ 00:05:19.860 09:59:05 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.860 09:59:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:19.860 09:59:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:19.860 09:59:05 -- common/autotest_common.sh@10 -- # set +x 00:05:19.860 ************************************ 00:05:19.860 START TEST dpdk_mem_utility 00:05:19.860 ************************************ 00:05:19.860 09:59:05 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.122 * Looking for test storage... 
00:05:20.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:20.122 09:59:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.122 09:59:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2591076 00:05:20.122 09:59:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2591076 00:05:20.122 09:59:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.122 09:59:05 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 2591076 ']' 00:05:20.122 09:59:05 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.122 09:59:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:20.122 09:59:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.122 09:59:05 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:20.122 09:59:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.122 [2024-05-15 09:59:05.750128] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:20.122 [2024-05-15 09:59:05.750179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591076 ] 00:05:20.122 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.122 [2024-05-15 09:59:05.808820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.122 [2024-05-15 09:59:05.839931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.068 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:21.068 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:05:21.068 09:59:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:21.068 09:59:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:21.069 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:21.069 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.069 { 00:05:21.069 "filename": "/tmp/spdk_mem_dump.txt" 00:05:21.069 } 00:05:21.069 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:21.069 09:59:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.069 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:21.069 1 heaps totaling size 814.000000 MiB 00:05:21.069 size: 814.000000 MiB heap id: 0 00:05:21.069 end heaps---------- 00:05:21.069 8 mempools totaling size 598.116089 MiB 00:05:21.069 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:21.069 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:21.069 size: 84.521057 MiB name: bdev_io_2591076 00:05:21.069 size: 51.011292 MiB name: evtpool_2591076 00:05:21.069 size: 50.003479 MiB name: 
msgpool_2591076 00:05:21.069 size: 21.763794 MiB name: PDU_Pool 00:05:21.069 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:21.069 size: 0.026123 MiB name: Session_Pool 00:05:21.069 end mempools------- 00:05:21.069 6 memzones totaling size 4.142822 MiB 00:05:21.069 size: 1.000366 MiB name: RG_ring_0_2591076 00:05:21.069 size: 1.000366 MiB name: RG_ring_1_2591076 00:05:21.069 size: 1.000366 MiB name: RG_ring_4_2591076 00:05:21.069 size: 1.000366 MiB name: RG_ring_5_2591076 00:05:21.069 size: 0.125366 MiB name: RG_ring_2_2591076 00:05:21.069 size: 0.015991 MiB name: RG_ring_3_2591076 00:05:21.069 end memzones------- 00:05:21.069 09:59:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:21.069 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:21.069 list of free elements. size: 12.519348 MiB 00:05:21.069 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:21.069 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:21.069 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:21.069 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:21.069 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:21.069 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:21.069 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:21.069 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:21.069 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:21.069 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:21.069 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:21.069 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:21.069 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:21.069 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:21.069 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:21.069 list of standard malloc elements. 
size: 199.218079 MiB 00:05:21.069 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:21.069 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:21.069 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:21.069 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:21.069 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:21.069 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:21.069 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:21.069 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:21.069 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:21.069 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:21.069 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:21.069 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:21.069 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:21.069 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:21.069 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:21.069 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:21.069 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:21.069 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:21.069 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:21.069 list of memzone associated elements. 
size: 602.262573 MiB 00:05:21.069 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:21.069 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:21.069 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:21.069 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:21.069 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:21.069 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2591076_0 00:05:21.069 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:21.069 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2591076_0 00:05:21.069 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:21.069 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2591076_0 00:05:21.069 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:21.069 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:21.069 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:21.069 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:21.069 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:21.069 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2591076 00:05:21.069 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:21.069 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2591076 00:05:21.069 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:21.069 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2591076 00:05:21.069 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:21.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:21.069 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:21.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:21.069 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:21.069 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:21.069 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:21.069 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:21.069 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:21.069 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2591076 00:05:21.069 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:21.069 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2591076 00:05:21.069 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:21.069 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2591076 00:05:21.069 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:21.069 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2591076 00:05:21.069 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:21.069 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2591076 00:05:21.069 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:21.069 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:21.069 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:21.069 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:21.069 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:21.069 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:21.069 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:21.069 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2591076 00:05:21.069 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:21.069 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:21.069 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:21.069 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:21.070 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:21.070 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2591076 00:05:21.070 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:21.070 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:21.070 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:21.070 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2591076 00:05:21.070 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:21.070 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2591076 00:05:21.070 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:21.070 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:21.070 09:59:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:21.070 09:59:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2591076 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 2591076 ']' 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 2591076 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2591076 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2591076' 00:05:21.070 killing process with pid 2591076 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 2591076 00:05:21.070 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 2591076 00:05:21.332 00:05:21.332 real 0m1.265s 00:05:21.332 user 0m1.344s 00:05:21.332 sys 0m0.360s 00:05:21.332 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:21.332 09:59:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.332 ************************************ 00:05:21.332 END TEST dpdk_mem_utility 00:05:21.332 ************************************ 00:05:21.332 09:59:06 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.332 09:59:06 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:21.332 09:59:06 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:21.332 09:59:06 -- common/autotest_common.sh@10 -- # set +x 00:05:21.332 ************************************ 00:05:21.332 START TEST event 00:05:21.332 ************************************ 00:05:21.332 09:59:06 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.332 * Looking for test storage... 
00:05:21.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:21.332 09:59:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:21.332 09:59:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.333 09:59:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.333 09:59:07 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:05:21.333 09:59:07 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:21.333 09:59:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.333 ************************************ 00:05:21.333 START TEST event_perf 00:05:21.333 ************************************ 00:05:21.333 09:59:07 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.333 Running I/O for 1 seconds...[2024-05-15 09:59:07.101631] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:21.333 [2024-05-15 09:59:07.101712] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591463 ] 00:05:21.594 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.595 [2024-05-15 09:59:07.167746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.595 [2024-05-15 09:59:07.205800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.595 [2024-05-15 09:59:07.205919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.595 [2024-05-15 09:59:07.206078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.595 Running I/O for 1 seconds...[2024-05-15 09:59:07.206078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.539 00:05:22.539 lcore 0: 164665 00:05:22.539 lcore 1: 164667 00:05:22.539 lcore 2: 164667 00:05:22.539 lcore 3: 164671 00:05:22.539 done. 00:05:22.539 00:05:22.539 real 0m1.165s 00:05:22.539 user 0m4.084s 00:05:22.539 sys 0m0.078s 00:05:22.539 09:59:08 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:22.539 09:59:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.539 ************************************ 00:05:22.539 END TEST event_perf 00:05:22.539 ************************************ 00:05:22.539 09:59:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.539 09:59:08 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:22.539 09:59:08 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.539 09:59:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.539 ************************************ 00:05:22.539 START TEST event_reactor 00:05:22.539 ************************************ 00:05:22.539 09:59:08 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.801 [2024-05-15 09:59:08.349442] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:22.801 [2024-05-15 09:59:08.349539] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591765 ] 00:05:22.801 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.801 [2024-05-15 09:59:08.414126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.801 [2024-05-15 09:59:08.449124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.747 test_start 00:05:23.748 oneshot 00:05:23.748 tick 100 00:05:23.748 tick 100 00:05:23.748 tick 250 00:05:23.748 tick 100 00:05:23.748 tick 100 00:05:23.748 tick 100 00:05:23.748 tick 250 00:05:23.748 tick 500 00:05:23.748 tick 100 00:05:23.748 tick 100 00:05:23.748 tick 250 00:05:23.748 tick 100 00:05:23.748 tick 100 00:05:23.748 test_end 00:05:23.748 00:05:23.748 real 0m1.158s 00:05:23.748 user 0m1.089s 00:05:23.748 sys 0m0.065s 00:05:23.748 09:59:09 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:23.748 09:59:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.748 ************************************ 00:05:23.748 END TEST event_reactor 00:05:23.748 ************************************ 00:05:23.748 09:59:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.748 09:59:09 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:05:23.748 09:59:09 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:23.748 09:59:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.010 ************************************ 00:05:24.010 START TEST event_reactor_perf 00:05:24.010 ************************************ 00:05:24.010 09:59:09 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.010 [2024-05-15 09:59:09.590621] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:24.010 [2024-05-15 09:59:09.590702] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591894 ] 00:05:24.010 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.010 [2024-05-15 09:59:09.655393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.010 [2024-05-15 09:59:09.690258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.957 test_start 00:05:24.957 test_end 00:05:24.957 Performance: 363000 events per second 00:05:24.957 00:05:24.957 real 0m1.161s 00:05:24.957 user 0m1.085s 00:05:24.957 sys 0m0.071s 00:05:24.957 09:59:10 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:24.957 09:59:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.957 ************************************ 00:05:24.957 END TEST event_reactor_perf 00:05:24.957 ************************************ 00:05:25.219 09:59:10 event -- event/event.sh@49 -- # uname -s 00:05:25.219 09:59:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:25.219 09:59:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.219 09:59:10 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:25.219 09:59:10 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.219 09:59:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.219 ************************************ 00:05:25.219 START TEST event_scheduler 00:05:25.219 ************************************ 00:05:25.219 09:59:10 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.219 * Looking for test storage... 00:05:25.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:25.219 09:59:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:25.219 09:59:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2592235 00:05:25.219 09:59:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.219 09:59:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:25.219 09:59:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2592235 00:05:25.219 09:59:10 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 2592235 ']' 00:05:25.219 09:59:10 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.219 09:59:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:25.219 09:59:10 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.220 09:59:10 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:25.220 09:59:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.220 [2024-05-15 09:59:10.970128] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:25.220 [2024-05-15 09:59:10.970189] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592235 ] 00:05:25.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.482 [2024-05-15 09:59:11.023071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.482 [2024-05-15 09:59:11.056251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.482 [2024-05-15 09:59:11.056387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.482 [2024-05-15 09:59:11.056686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.482 [2024-05-15 09:59:11.056687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:05:25.482 09:59:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.482 POWER: Env isn't set yet! 00:05:25.482 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:25.482 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.482 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.482 POWER: Attempting to initialise PSTAT power management... 00:05:25.482 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:25.482 POWER: Initialized successfully for lcore 0 power management 00:05:25.482 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:25.482 POWER: Initialized successfully for lcore 1 power management 00:05:25.482 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:25.482 POWER: Initialized successfully for lcore 2 power management 00:05:25.482 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:25.482 POWER: Initialized successfully for lcore 3 power management 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.482 09:59:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.482 [2024-05-15 09:59:11.173994] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
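The scheduler app above was launched with --wait-for-rpc, so the test selects a scheduler before letting framework initialization finish. Stripped of the rpc_cmd wrapper, the handshake is just two RPCs (default /var/tmp/spdk.sock socket assumed, paths shortened relative to the spdk checkout):

    ./scripts/rpc.py framework_set_scheduler dynamic  # pick the dynamic scheduler; the POWER: governor lines above are printed at this point
    ./scripts/rpc.py framework_start_init             # finish framework init; the test app then reports "Scheduler test application started."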
00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.482 09:59:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:25.482 09:59:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.482 ************************************ 00:05:25.482 START TEST scheduler_create_thread 00:05:25.482 ************************************ 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.482 2 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.482 3 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.482 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.483 4 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.483 5 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.483 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.745 6 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.745 7 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.745 8 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:25.745 09:59:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.135 9 00:05:27.135 09:59:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.135 09:59:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:27.135 09:59:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.135 09:59:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.710 10 00:05:27.710 09:59:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:27.710 09:59:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.710 09:59:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:27.710 09:59:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.656 09:59:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:28.656 09:59:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.656 09:59:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.656 09:59:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:28.656 09:59:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.230 09:59:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:29.230 09:59:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.230 09:59:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:29.230 09:59:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.803 09:59:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:29.803 09:59:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:29.803 09:59:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:29.803 09:59:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:29.803 09:59:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.432 09:59:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:30.432 00:05:30.432 real 0m4.725s 00:05:30.432 user 0m0.026s 00:05:30.432 sys 0m0.005s 00:05:30.432 09:59:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:30.432 09:59:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.432 ************************************ 00:05:30.432 END TEST scheduler_create_thread 00:05:30.432 ************************************ 00:05:30.432 09:59:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:30.432 09:59:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2592235 00:05:30.432 09:59:15 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 2592235 ']' 00:05:30.432 09:59:15 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 2592235 00:05:30.432 09:59:15 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:05:30.432 09:59:15 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:30.432 09:59:15 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2592235 00:05:30.432 09:59:16 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:05:30.432 09:59:16 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:05:30.432 09:59:16 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2592235' 00:05:30.432 killing process with pid 2592235 00:05:30.432 09:59:16 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 2592235 00:05:30.432 09:59:16 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 2592235 00:05:30.432 [2024-05-15 09:59:16.090940] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
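The scheduler_create_thread sub-test that just finished exercises the scheduler_plugin RPCs end to end. Condensed, and with the returned thread ids (11 and 12 in this run) captured into a shell variable for clarity, the lifecycle it walks through is:

    # pinned dummy threads, one per core mask, created fully busy (-a 100) or idle (-a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
    # an unpinned thread created idle, then raised to 50% active by id
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    # and one thread created only to be deleted again
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"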
00:05:30.695 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:30.695 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:30.695 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:30.695 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:30.695 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:30.695 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:30.695 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:30.695 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:30.695 00:05:30.695 real 0m5.464s 00:05:30.695 user 0m12.617s 00:05:30.695 sys 0m0.299s 00:05:30.695 09:59:16 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:30.696 09:59:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.696 ************************************ 00:05:30.696 END TEST event_scheduler 00:05:30.696 ************************************ 00:05:30.696 09:59:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:30.696 09:59:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:30.696 09:59:16 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:30.696 09:59:16 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:30.696 09:59:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.696 ************************************ 00:05:30.696 START TEST app_repeat 00:05:30.696 ************************************ 00:05:30.696 09:59:16 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2593381 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2593381' 00:05:30.696 Process app_repeat pid: 2593381 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:30.696 spdk_app_start Round 0 00:05:30.696 09:59:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2593381 /var/tmp/spdk-nbd.sock 00:05:30.696 09:59:16 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2593381 ']' 00:05:30.696 09:59:16 event.app_repeat -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.696 09:59:16 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:30.696 09:59:16 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.696 09:59:16 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:30.696 09:59:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.696 [2024-05-15 09:59:16.409094] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:30.696 [2024-05-15 09:59:16.409178] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593381 ] 00:05:30.696 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.696 [2024-05-15 09:59:16.480279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.958 [2024-05-15 09:59:16.519378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.958 [2024-05-15 09:59:16.519570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.958 09:59:16 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:30.958 09:59:16 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:30.958 09:59:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.958 Malloc0 00:05:30.958 09:59:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.221 Malloc1 00:05:31.221 09:59:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.221 09:59:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.483 /dev/nbd0 00:05:31.483 09:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.483 09:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.483 1+0 records in 00:05:31.483 1+0 records out 00:05:31.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200371 s, 20.4 MB/s 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:31.483 09:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.483 09:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.483 09:59:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.483 /dev/nbd1 00:05:31.483 09:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.483 09:59:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:31.483 09:59:17 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:31.484 09:59:17 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.484 1+0 records in 00:05:31.484 1+0 records out 00:05:31.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000270407 s, 15.1 MB/s 00:05:31.484 09:59:17 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.484 09:59:17 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:31.484 09:59:17 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.746 09:59:17 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:31.746 09:59:17 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.746 { 00:05:31.746 "nbd_device": "/dev/nbd0", 00:05:31.746 "bdev_name": "Malloc0" 00:05:31.746 }, 00:05:31.746 { 00:05:31.746 "nbd_device": "/dev/nbd1", 00:05:31.746 "bdev_name": "Malloc1" 00:05:31.746 } 00:05:31.746 ]' 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.746 { 00:05:31.746 "nbd_device": "/dev/nbd0", 00:05:31.746 "bdev_name": "Malloc0" 00:05:31.746 }, 00:05:31.746 { 00:05:31.746 "nbd_device": "/dev/nbd1", 00:05:31.746 "bdev_name": "Malloc1" 00:05:31.746 } 00:05:31.746 ]' 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.746 /dev/nbd1' 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.746 /dev/nbd1' 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.746 256+0 records in 00:05:31.746 256+0 records out 00:05:31.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124621 s, 84.1 MB/s 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.746 256+0 records in 00:05:31.746 256+0 records out 00:05:31.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159757 s, 65.6 MB/s 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.746 09:59:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.008 256+0 records in 00:05:32.008 256+0 records out 00:05:32.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175041 s, 59.9 MB/s 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.008 09:59:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.009 09:59:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.271 09:59:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.533 09:59:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.533 09:59:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.533 09:59:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.795 [2024-05-15 09:59:18.410571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.795 [2024-05-15 09:59:18.441753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.795 [2024-05-15 09:59:18.441756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.795 [2024-05-15 09:59:18.473557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.795 [2024-05-15 09:59:18.473592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
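Each app_repeat round performs the same nbd round trip before the app is killed and restarted: two 64 MiB malloc bdevs are exposed as /dev/nbd0 and /dev/nbd1, random data is written through the nbd devices and compared back. Stripped of the nbd_common.sh helpers, a single device's pass is essentially (rpc socket and bdev name as created in this run, paths shortened relative to the spdk checkout):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                      # 64 MiB malloc bdev with 4096-byte blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0                # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB of random reference data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                   # verify the device contents against the file
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0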
00:05:36.111 09:59:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.111 09:59:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:36.111 spdk_app_start Round 1 00:05:36.111 09:59:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2593381 /var/tmp/spdk-nbd.sock 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2593381 ']' 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:36.111 09:59:21 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:36.111 09:59:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.111 Malloc0 00:05:36.111 09:59:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.111 Malloc1 00:05:36.111 09:59:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.111 09:59:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.373 /dev/nbd0 00:05:36.373 09:59:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.373 09:59:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.373 1+0 records in 00:05:36.373 1+0 records out 00:05:36.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276902 s, 14.8 MB/s 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:36.373 09:59:21 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:36.373 09:59:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.373 09:59:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.373 09:59:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.373 /dev/nbd1 00:05:36.373 09:59:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.373 09:59:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.373 1+0 records in 00:05:36.373 1+0 records out 00:05:36.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242069 s, 16.9 MB/s 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:36.373 09:59:22 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:36.373 09:59:22 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:36.373 09:59:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.373 09:59:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.373 09:59:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.373 09:59:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.373 09:59:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.636 { 00:05:36.636 "nbd_device": "/dev/nbd0", 00:05:36.636 "bdev_name": "Malloc0" 00:05:36.636 }, 00:05:36.636 { 00:05:36.636 "nbd_device": "/dev/nbd1", 00:05:36.636 "bdev_name": "Malloc1" 00:05:36.636 } 00:05:36.636 ]' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.636 { 00:05:36.636 "nbd_device": "/dev/nbd0", 00:05:36.636 "bdev_name": "Malloc0" 00:05:36.636 }, 00:05:36.636 { 00:05:36.636 "nbd_device": "/dev/nbd1", 00:05:36.636 "bdev_name": "Malloc1" 00:05:36.636 } 00:05:36.636 ]' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.636 /dev/nbd1' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.636 /dev/nbd1' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.636 256+0 records in 00:05:36.636 256+0 records out 00:05:36.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116753 s, 89.8 MB/s 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.636 256+0 records in 00:05:36.636 256+0 records out 00:05:36.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0157379 s, 66.6 MB/s 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.636 256+0 records in 00:05:36.636 256+0 records out 00:05:36.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184022 s, 57.0 MB/s 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.636 09:59:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.898 09:59:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.162 09:59:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.425 09:59:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.425 09:59:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.425 09:59:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.687 [2024-05-15 09:59:23.269366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.687 [2024-05-15 09:59:23.300045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.687 [2024-05-15 09:59:23.300047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.687 [2024-05-15 09:59:23.332536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.687 [2024-05-15 09:59:23.332572] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
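Round 1 above exercises the nbd data-verify path end to end: two Malloc bdevs (64 MB, 4 KiB blocks) are exported as /dev/nbd0 and /dev/nbd1 over the /var/tmp/spdk-nbd.sock RPC socket, a 1 MiB random file is written through both devices with O_DIRECT, read back with cmp, the devices are detached, and the target is told to exit with spdk_kill_instance SIGTERM before the three-second pause and the next round. A condensed sketch of that sequence, with the long workspace paths shortened to $rpc; this is not the verbatim event.sh/nbd_common.sh code:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"       # shortened path
    $rpc bdev_malloc_create 64 4096                      # -> Malloc0
    $rpc bdev_malloc_create 64 4096                      # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $nbd                    # verify the write went through
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc spdk_kill_instance SIGTERM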
00:05:40.995 09:59:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.995 09:59:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:40.995 spdk_app_start Round 2 00:05:40.995 09:59:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2593381 /var/tmp/spdk-nbd.sock 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2593381 ']' 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:40.995 09:59:26 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:40.995 09:59:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.995 Malloc0 00:05:40.995 09:59:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.995 Malloc1 00:05:40.995 09:59:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.995 09:59:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.996 09:59:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.996 09:59:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.996 09:59:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.996 09:59:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.996 09:59:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.258 /dev/nbd0 00:05:41.258 09:59:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.258 09:59:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.258 1+0 records in 00:05:41.258 1+0 records out 00:05:41.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026602 s, 15.4 MB/s 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:41.258 09:59:26 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:41.258 09:59:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.258 09:59:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.258 09:59:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.258 /dev/nbd1 00:05:41.258 09:59:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.258 09:59:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.258 1+0 records in 00:05:41.258 1+0 records out 00:05:41.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294721 s, 13.9 MB/s 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:05:41.258 09:59:27 event.app_repeat -- 
common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:05:41.258 09:59:27 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:05:41.258 09:59:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.258 09:59:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.258 09:59:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.258 09:59:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.258 09:59:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.520 { 00:05:41.520 "nbd_device": "/dev/nbd0", 00:05:41.520 "bdev_name": "Malloc0" 00:05:41.520 }, 00:05:41.520 { 00:05:41.520 "nbd_device": "/dev/nbd1", 00:05:41.520 "bdev_name": "Malloc1" 00:05:41.520 } 00:05:41.520 ]' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.520 { 00:05:41.520 "nbd_device": "/dev/nbd0", 00:05:41.520 "bdev_name": "Malloc0" 00:05:41.520 }, 00:05:41.520 { 00:05:41.520 "nbd_device": "/dev/nbd1", 00:05:41.520 "bdev_name": "Malloc1" 00:05:41.520 } 00:05:41.520 ]' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.520 /dev/nbd1' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.520 /dev/nbd1' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.520 256+0 records in 00:05:41.520 256+0 records out 00:05:41.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124514 s, 84.2 MB/s 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.520 256+0 records in 00:05:41.520 256+0 records out 00:05:41.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0168702 s, 62.2 MB/s 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.520 256+0 records in 00:05:41.520 256+0 records out 00:05:41.520 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176557 s, 59.4 MB/s 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.520 09:59:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.782 09:59:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.044 09:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.306 09:59:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.306 09:59:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.306 09:59:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.567 [2024-05-15 09:59:28.163281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.567 [2024-05-15 09:59:28.193794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.567 [2024-05-15 09:59:28.193798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.567 [2024-05-15 09:59:28.225531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.567 [2024-05-15 09:59:28.225578] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
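Round 2 repeats the same write/verify pass; the only helper worth calling out is waitfornbd, traced out of autotest_common.sh (@865-@886 above), which polls /proc/partitions for the new device and then performs a single 4 KiB O_DIRECT read through it to confirm it answers. Reconstructed roughly from the xtrace; the retry delay is an assumption, the trace only shows the loop bounds:

    waitfornbd() {                       # reconstruction, not the verbatim helper
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                    # assumed back-off; not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s nbdtest)
            rm -f nbdtest
            [ "$size" != 0 ] && return 0 # one good direct read means the device is live
        done
        return 1
    }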
00:05:45.874 09:59:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2593381 /var/tmp/spdk-nbd.sock 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 2593381 ']' 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:05:45.874 09:59:31 event.app_repeat -- event/event.sh@39 -- # killprocess 2593381 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 2593381 ']' 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 2593381 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2593381 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2593381' 00:05:45.874 killing process with pid 2593381 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@966 -- # kill 2593381 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@971 -- # wait 2593381 00:05:45.874 spdk_app_start is called in Round 0. 00:05:45.874 Shutdown signal received, stop current app iteration 00:05:45.874 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:05:45.874 spdk_app_start is called in Round 1. 00:05:45.874 Shutdown signal received, stop current app iteration 00:05:45.874 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:05:45.874 spdk_app_start is called in Round 2. 00:05:45.874 Shutdown signal received, stop current app iteration 00:05:45.874 Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 reinitialization... 00:05:45.874 spdk_app_start is called in Round 3. 
00:05:45.874 Shutdown signal received, stop current app iteration 00:05:45.874 09:59:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.874 09:59:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.874 00:05:45.874 real 0m14.985s 00:05:45.874 user 0m32.524s 00:05:45.874 sys 0m2.052s 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:45.874 09:59:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.874 ************************************ 00:05:45.874 END TEST app_repeat 00:05:45.874 ************************************ 00:05:45.874 09:59:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.874 09:59:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:45.874 09:59:31 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:45.874 09:59:31 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:45.874 09:59:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.874 ************************************ 00:05:45.874 START TEST cpu_locks 00:05:45.874 ************************************ 00:05:45.874 09:59:31 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:45.874 * Looking for test storage... 00:05:45.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:45.874 09:59:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.874 09:59:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.874 09:59:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.874 09:59:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.874 09:59:31 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:45.874 09:59:31 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:45.874 09:59:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.874 ************************************ 00:05:45.874 START TEST default_locks 00:05:45.874 ************************************ 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2596646 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2596646 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2596646 ']' 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
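From here the log leaves event.sh for test/event/cpu_locks.sh. Each case below is launched through the run_test wrapper; judging only from the START TEST / END TEST banners and the real/user/sys summary each case ends with, it behaves roughly like the hypothetical stand-in below (the actual autotest_common.sh version also manages xtrace and is not shown in this log):

    run_test() {                         # hypothetical stand-in for the traced wrapper
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        echo "END TEST $name"
    }
    # e.g. run_test cpu_locks .../test/event/cpu_locks.sh
    #      run_test default_locks default_locks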
00:05:45.874 09:59:31 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:45.874 09:59:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.874 [2024-05-15 09:59:31.641159] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:45.875 [2024-05-15 09:59:31.641223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596646 ] 00:05:46.136 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.136 [2024-05-15 09:59:31.705320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.136 [2024-05-15 09:59:31.744528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.708 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:46.708 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:05:46.708 09:59:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2596646 00:05:46.708 09:59:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2596646 00:05:46.708 09:59:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.282 lslocks: write error 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2596646 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 2596646 ']' 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 2596646 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2596646 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2596646' 00:05:47.282 killing process with pid 2596646 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 2596646 00:05:47.282 09:59:32 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 2596646 00:05:47.282 09:59:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2596646 00:05:47.282 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:47.282 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2596646 00:05:47.282 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 2596646 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 2596646 ']' 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2596646) - No such process 00:05:47.551 ERROR: process (pid: 2596646) is no longer running 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.551 00:05:47.551 real 0m1.499s 00:05:47.551 user 0m1.566s 00:05:47.551 sys 0m0.544s 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:47.551 09:59:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.551 ************************************ 00:05:47.552 END TEST default_locks 00:05:47.552 ************************************ 00:05:47.552 09:59:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.552 09:59:33 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:47.552 09:59:33 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:47.552 09:59:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.552 ************************************ 00:05:47.552 START TEST default_locks_via_rpc 00:05:47.552 ************************************ 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2596970 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2596970 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.552 09:59:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2596970 ']' 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:47.552 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.552 [2024-05-15 09:59:33.230489] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:47.552 [2024-05-15 09:59:33.230548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596970 ] 00:05:47.552 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.552 [2024-05-15 09:59:33.293242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.552 [2024-05-15 09:59:33.329500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2596970 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2596970 00:05:48.550 09:59:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2596970 00:05:48.831 09:59:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 2596970 ']' 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 2596970 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2596970 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2596970' 00:05:48.831 killing process with pid 2596970 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 2596970 00:05:48.831 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 2596970 00:05:49.094 00:05:49.094 real 0m1.489s 00:05:49.094 user 0m1.578s 00:05:49.094 sys 0m0.510s 00:05:49.094 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:49.094 09:59:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 ************************************ 00:05:49.094 END TEST default_locks_via_rpc 00:05:49.094 ************************************ 00:05:49.094 09:59:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.094 09:59:34 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:49.094 09:59:34 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:49.094 09:59:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.094 ************************************ 00:05:49.094 START TEST non_locking_app_on_locked_coremask 00:05:49.094 ************************************ 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2597289 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2597289 /var/tmp/spdk.sock 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2597289 ']' 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
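default_locks and default_locks_via_rpc both hinge on the check traced as cpu_locks.sh@22: an spdk_tgt started on core 0 takes a file lock whose name contains spdk_cpu_lock, and lslocks on the target's pid can see it (the stray "lslocks: write error" above is most likely lslocks complaining that grep -q closed the pipe as soon as it matched). A condensed sketch of the check and of the RPC variant, reconstructed from the trace:

    locks_exist() {                      # the check traced at cpu_locks.sh@22
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    # default_locks: spdk_tgt -m 0x1 is started and locks_exist $pid must succeed;
    # after killprocess, the NOT-wrapped waitforlisten on the dead pid must fail.
    # default_locks_via_rpc: the same lock is dropped and re-taken at runtime:
    #   rpc.py framework_disable_cpumask_locks  -> no_locks confirms no lock files remain
    #   rpc.py framework_enable_cpumask_locks   -> locks_exist $pid succeeds again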
00:05:49.094 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:49.095 09:59:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.095 [2024-05-15 09:59:34.787030] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:49.095 [2024-05-15 09:59:34.787086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597289 ] 00:05:49.095 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.095 [2024-05-15 09:59:34.850790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.095 [2024-05-15 09:59:34.887127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2597610 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2597610 /var/tmp/spdk2.sock 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2597610 ']' 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:50.042 09:59:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.042 [2024-05-15 09:59:35.624062] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:50.042 [2024-05-15 09:59:35.624117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597610 ] 00:05:50.042 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.042 [2024-05-15 09:59:35.713097] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.042 [2024-05-15 09:59:35.713128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.042 [2024-05-15 09:59:35.776015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.617 09:59:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:50.617 09:59:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:50.617 09:59:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2597289 00:05:50.617 09:59:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2597289 00:05:50.617 09:59:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.562 lslocks: write error 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2597289 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2597289 ']' 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2597289 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2597289 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:51.562 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2597289' 00:05:51.562 killing process with pid 2597289 00:05:51.563 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2597289 00:05:51.563 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2597289 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2597610 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2597610 ']' 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2597610 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2597610 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2597610' 00:05:51.825 
killing process with pid 2597610 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2597610 00:05:51.825 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2597610 00:05:52.087 00:05:52.087 real 0m2.940s 00:05:52.087 user 0m3.205s 00:05:52.087 sys 0m0.861s 00:05:52.087 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:52.087 09:59:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.087 ************************************ 00:05:52.087 END TEST non_locking_app_on_locked_coremask 00:05:52.087 ************************************ 00:05:52.087 09:59:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.087 09:59:37 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:52.087 09:59:37 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:52.087 09:59:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.087 ************************************ 00:05:52.087 START TEST locking_app_on_unlocked_coremask 00:05:52.087 ************************************ 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2597983 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2597983 /var/tmp/spdk.sock 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2597983 ']' 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:52.087 09:59:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.087 [2024-05-15 09:59:37.806666] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:52.087 [2024-05-15 09:59:37.806712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597983 ] 00:05:52.087 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.087 [2024-05-15 09:59:37.865914] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
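
An aside on the flag exercised at this point in the trace: --disable-cpumask-locks makes the first spdk_tgt skip taking the per-core lock file, which is what lets the second instance started just below run on core 0 as well. A minimal sketch of that pattern, reusing the mask and RPC socket name from the trace (the jenkins-specific binary path is shortened, and the readiness polling that waitforlisten performs is omitted):

    # first target: runs on core 0 but does not take the core lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &

    # second target: same core mask, separate RPC socket; it can still lock core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
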
00:05:52.087 [2024-05-15 09:59:37.865941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.348 [2024-05-15 09:59:37.896042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.348 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:52.348 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:52.348 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2597997 00:05:52.348 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2597997 /var/tmp/spdk2.sock 00:05:52.348 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.348 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2597997 ']' 00:05:52.348 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.349 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:52.349 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.349 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:52.349 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.349 [2024-05-15 09:59:38.112876] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:52.349 [2024-05-15 09:59:38.112945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597997 ] 00:05:52.349 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.610 [2024-05-15 09:59:38.202496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.610 [2024-05-15 09:59:38.266421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.183 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:53.183 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:53.183 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2597997 00:05:53.183 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2597997 00:05:53.183 09:59:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.754 lslocks: write error 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2597983 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2597983 ']' 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2597983 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2597983 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2597983' 00:05:53.754 killing process with pid 2597983 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2597983 00:05:53.754 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2597983 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2597997 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2597997 ']' 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 2597997 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2597997 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2597997' 00:05:54.327 killing process with pid 2597997 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 2597997 00:05:54.327 09:59:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 2597997 00:05:54.588 00:05:54.588 real 0m2.427s 00:05:54.588 user 0m2.643s 00:05:54.588 sys 0m0.881s 00:05:54.588 09:59:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:54.588 09:59:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.588 ************************************ 00:05:54.589 END TEST locking_app_on_unlocked_coremask 00:05:54.589 ************************************ 00:05:54.589 09:59:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:54.589 09:59:40 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:54.589 09:59:40 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:54.589 09:59:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.589 ************************************ 00:05:54.589 START TEST locking_app_on_locked_coremask 00:05:54.589 ************************************ 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2598607 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2598607 /var/tmp/spdk.sock 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2598607 ']' 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:54.589 09:59:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.589 [2024-05-15 09:59:40.312840] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:54.589 [2024-05-15 09:59:40.312890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598607 ] 00:05:54.589 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.589 [2024-05-15 09:59:40.373657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.850 [2024-05-15 09:59:40.407782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2598704 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2598704 /var/tmp/spdk2.sock 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2598704 /var/tmp/spdk2.sock 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2598704 /var/tmp/spdk2.sock 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 2598704 ']' 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:55.425 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.425 [2024-05-15 09:59:41.129490] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:55.425 [2024-05-15 09:59:41.129546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598704 ] 00:05:55.425 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.425 [2024-05-15 09:59:41.215856] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2598607 has claimed it. 00:05:55.425 [2024-05-15 09:59:41.215894] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2598704) - No such process 00:05:55.999 ERROR: process (pid: 2598704) is no longer running 00:05:55.999 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2598607 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2598607 00:05:56.000 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.262 lslocks: write error 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2598607 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 2598607 ']' 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 2598607 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2598607 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2598607' 00:05:56.262 killing process with pid 2598607 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 2598607 00:05:56.262 09:59:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 2598607 00:05:56.524 00:05:56.524 real 0m1.914s 00:05:56.524 user 0m2.145s 00:05:56.524 sys 0m0.483s 00:05:56.524 09:59:42 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:05:56.524 09:59:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.524 ************************************ 00:05:56.524 END TEST locking_app_on_locked_coremask 00:05:56.524 ************************************ 00:05:56.524 09:59:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.524 09:59:42 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:56.524 09:59:42 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:56.524 09:59:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.524 ************************************ 00:05:56.524 START TEST locking_overlapped_coremask 00:05:56.524 ************************************ 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2599058 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2599058 /var/tmp/spdk.sock 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2599058 ']' 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:56.524 09:59:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.524 [2024-05-15 09:59:42.302872] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:56.524 [2024-05-15 09:59:42.302926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599058 ] 00:05:56.787 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.787 [2024-05-15 09:59:42.368719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.787 [2024-05-15 09:59:42.408211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.787 [2024-05-15 09:59:42.408231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.787 [2024-05-15 09:59:42.408237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2599079 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2599079 /var/tmp/spdk2.sock 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 2599079 /var/tmp/spdk2.sock 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 2599079 /var/tmp/spdk2.sock 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 2599079 ']' 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:57.361 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.361 [2024-05-15 09:59:43.134393] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:05:57.361 [2024-05-15 09:59:43.134446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599079 ] 00:05:57.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.622 [2024-05-15 09:59:43.205599] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2599058 has claimed it. 00:05:57.622 [2024-05-15 09:59:43.205630] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:58.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (2599079) - No such process 00:05:58.194 ERROR: process (pid: 2599079) is no longer running 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2599058 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 2599058 ']' 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 2599058 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2599058 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2599058' 00:05:58.194 killing process with pid 2599058 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 
2599058 00:05:58.194 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 2599058 00:05:58.456 00:05:58.456 real 0m1.746s 00:05:58.456 user 0m5.019s 00:05:58.456 sys 0m0.380s 00:05:58.456 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:58.456 09:59:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.456 ************************************ 00:05:58.456 END TEST locking_overlapped_coremask 00:05:58.456 ************************************ 00:05:58.456 09:59:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:58.456 09:59:44 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:58.456 09:59:44 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:58.456 09:59:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.456 ************************************ 00:05:58.456 START TEST locking_overlapped_coremask_via_rpc 00:05:58.456 ************************************ 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2599433 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2599433 /var/tmp/spdk.sock 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2599433 ']' 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:58.456 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.456 [2024-05-15 09:59:44.132522] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:58.457 [2024-05-15 09:59:44.132574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599433 ] 00:05:58.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.457 [2024-05-15 09:59:44.192118] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
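
Two conventions worth spelling out before the next test runs: -m takes a hexadecimal core bitmask (0x7 selects cores 0-2, 0x1c selects cores 2-4, so core 2 is the one both targets contend for in these overlapped tests), and every core a target claims is backed by a /var/tmp/spdk_cpu_lock_NNN file that the helpers above inspect. A small sketch of both, assuming a plain one-bit-per-core mask and a placeholder $pid for the running target:

    # decode a coremask into core ids
    mask=0x1c
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # 0x7 -> cores 0 1 2 ; 0x1c -> cores 2 3 4

    # the per-core lock files expected by check_remaining_locks for mask 0x7
    ls /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002

    # the lslocks probe used throughout this log to confirm a target holds its locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock
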
00:05:58.457 [2024-05-15 09:59:44.192147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.457 [2024-05-15 09:59:44.224490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.457 [2024-05-15 09:59:44.224687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.457 [2024-05-15 09:59:44.224690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2599484 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2599484 /var/tmp/spdk2.sock 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2599484 ']' 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:59.402 09:59:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.402 [2024-05-15 09:59:44.955163] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:05:59.402 [2024-05-15 09:59:44.955216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2599484 ] 00:05:59.402 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.402 [2024-05-15 09:59:45.024864] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.402 [2024-05-15 09:59:45.024889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.402 [2024-05-15 09:59:45.086592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.402 [2024-05-15 09:59:45.090412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.402 [2024-05-15 09:59:45.090415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.976 [2024-05-15 09:59:45.738347] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2599433 has claimed it. 
00:05:59.976 request: 00:05:59.976 { 00:05:59.976 "method": "framework_enable_cpumask_locks", 00:05:59.976 "req_id": 1 00:05:59.976 } 00:05:59.976 Got JSON-RPC error response 00:05:59.976 response: 00:05:59.976 { 00:05:59.976 "code": -32603, 00:05:59.976 "message": "Failed to claim CPU core: 2" 00:05:59.976 } 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:59.976 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2599433 /var/tmp/spdk.sock 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2599433 ']' 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:59.977 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2599484 /var/tmp/spdk2.sock 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 2599484 ']' 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
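
The JSON-RPC exchange above is the crux of this test: with --disable-cpumask-locks both targets start on overlapping masks, and only when framework_enable_cpumask_locks is invoked on the second one does the claim on core 2 fail. A manual equivalent, assuming the stock scripts/rpc.py client from the SPDK tree:

    # ask the second target (spdk2.sock) to take its core locks after the fact
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected result here: the -32603 "Failed to claim CPU core: 2" error shown above,
    # because the first target (pid 2599433) already holds the lock for core 2
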
00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:00.239 09:59:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.501 00:06:00.501 real 0m2.009s 00:06:00.501 user 0m0.782s 00:06:00.501 sys 0m0.152s 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:00.501 09:59:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.501 ************************************ 00:06:00.501 END TEST locking_overlapped_coremask_via_rpc 00:06:00.501 ************************************ 00:06:00.501 09:59:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:00.501 09:59:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2599433 ]] 00:06:00.501 09:59:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2599433 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2599433 ']' 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2599433 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2599433 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2599433' 00:06:00.501 killing process with pid 2599433 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2599433 00:06:00.501 09:59:46 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2599433 00:06:00.763 09:59:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2599484 ]] 00:06:00.763 09:59:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2599484 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2599484 ']' 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2599484 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2599484 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2599484' 00:06:00.763 killing process with pid 2599484 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 2599484 00:06:00.763 09:59:46 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 2599484 00:06:01.025 09:59:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.025 09:59:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.025 09:59:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2599433 ]] 00:06:01.025 09:59:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2599433 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2599433 ']' 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2599433 00:06:01.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2599433) - No such process 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2599433 is not found' 00:06:01.025 Process with pid 2599433 is not found 00:06:01.025 09:59:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2599484 ]] 00:06:01.025 09:59:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2599484 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 2599484 ']' 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 2599484 00:06:01.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2599484) - No such process 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 2599484 is not found' 00:06:01.025 Process with pid 2599484 is not found 00:06:01.025 09:59:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.025 00:06:01.025 real 0m15.182s 00:06:01.025 user 0m26.642s 00:06:01.025 sys 0m4.667s 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:01.025 09:59:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.025 ************************************ 00:06:01.025 END TEST cpu_locks 00:06:01.025 ************************************ 00:06:01.025 00:06:01.025 real 0m39.716s 00:06:01.025 user 1m18.269s 00:06:01.025 sys 0m7.617s 00:06:01.026 09:59:46 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:01.026 09:59:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.026 ************************************ 00:06:01.026 END TEST event 00:06:01.026 ************************************ 00:06:01.026 09:59:46 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:01.026 09:59:46 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:01.026 09:59:46 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:01.026 09:59:46 -- common/autotest_common.sh@10 -- # set +x 00:06:01.026 ************************************ 00:06:01.026 START TEST thread 00:06:01.026 ************************************ 00:06:01.026 09:59:46 thread -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:01.026 * Looking for test storage... 00:06:01.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:01.026 09:59:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.026 09:59:46 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:01.026 09:59:46 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:01.026 09:59:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.288 ************************************ 00:06:01.288 START TEST thread_poller_perf 00:06:01.288 ************************************ 00:06:01.288 09:59:46 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:01.288 [2024-05-15 09:59:46.865572] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:01.288 [2024-05-15 09:59:46.865653] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600059 ] 00:06:01.288 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.288 [2024-05-15 09:59:46.929340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.288 [2024-05-15 09:59:46.962226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.288 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:02.233 ====================================== 00:06:02.233 busy:2409736354 (cyc) 00:06:02.233 total_run_count: 287000 00:06:02.233 tsc_hz: 2400000000 (cyc) 00:06:02.233 ====================================== 00:06:02.233 poller_cost: 8396 (cyc), 3498 (nsec) 00:06:02.233 00:06:02.233 real 0m1.165s 00:06:02.233 user 0m1.087s 00:06:02.233 sys 0m0.075s 00:06:02.233 09:59:48 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:02.233 09:59:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.233 ************************************ 00:06:02.233 END TEST thread_poller_perf 00:06:02.233 ************************************ 00:06:02.495 09:59:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.495 09:59:48 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:02.495 09:59:48 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:02.495 09:59:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.495 ************************************ 00:06:02.495 START TEST thread_poller_perf 00:06:02.495 ************************************ 00:06:02.495 09:59:48 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.495 [2024-05-15 09:59:48.107587] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
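
For reference, the summary block printed by the first poller_perf run above can be reproduced from its own counters: poller_cost appears to be the busy cycle count divided by total_run_count, with tsc_hz used to convert cycles to nanoseconds.

    # integer arithmetic on the values printed above
    busy_cyc=2409736354 ; runs=287000 ; tsc_hz=2400000000
    echo "poller_cost: $(( busy_cyc / runs )) (cyc)"                         # 8396
    echo "poller_cost: $(( busy_cyc / runs * 1000000000 / tsc_hz )) (nsec)"  # 3498
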
00:06:02.495 [2024-05-15 09:59:48.107669] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600239 ] 00:06:02.495 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.495 [2024-05-15 09:59:48.172471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.495 [2024-05-15 09:59:48.204596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.495 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:03.920 ====================================== 00:06:03.920 busy:2402006242 (cyc) 00:06:03.920 total_run_count: 3812000 00:06:03.920 tsc_hz: 2400000000 (cyc) 00:06:03.920 ====================================== 00:06:03.920 poller_cost: 630 (cyc), 262 (nsec) 00:06:03.920 00:06:03.920 real 0m1.158s 00:06:03.920 user 0m1.078s 00:06:03.920 sys 0m0.075s 00:06:03.920 09:59:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:03.920 09:59:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.920 ************************************ 00:06:03.920 END TEST thread_poller_perf 00:06:03.920 ************************************ 00:06:03.920 09:59:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.920 00:06:03.920 real 0m2.543s 00:06:03.920 user 0m2.226s 00:06:03.920 sys 0m0.313s 00:06:03.920 09:59:49 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:03.920 09:59:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.920 ************************************ 00:06:03.920 END TEST thread 00:06:03.920 ************************************ 00:06:03.920 09:59:49 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:03.920 09:59:49 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:03.920 09:59:49 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:03.920 09:59:49 -- common/autotest_common.sh@10 -- # set +x 00:06:03.920 ************************************ 00:06:03.920 START TEST accel 00:06:03.920 ************************************ 00:06:03.920 09:59:49 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:03.920 * Looking for test storage... 00:06:03.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:03.920 09:59:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:03.920 09:59:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:03.920 09:59:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.920 09:59:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2600632 00:06:03.920 09:59:49 accel -- accel/accel.sh@63 -- # waitforlisten 2600632 00:06:03.920 09:59:49 accel -- common/autotest_common.sh@828 -- # '[' -z 2600632 ']' 00:06:03.920 09:59:49 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.920 09:59:49 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:03.920 09:59:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:03.920 09:59:49 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:03.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.920 09:59:49 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:03.920 09:59:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:03.920 09:59:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.920 09:59:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.920 09:59:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.920 09:59:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.920 09:59:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.920 09:59:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.920 09:59:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:03.920 09:59:49 accel -- accel/accel.sh@41 -- # jq -r . 00:06:03.920 [2024-05-15 09:59:49.513757] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:03.920 [2024-05-15 09:59:49.513826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600632 ] 00:06:03.920 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.920 [2024-05-15 09:59:49.579440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.920 [2024-05-15 09:59:49.617762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.866 09:59:50 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:04.866 09:59:50 accel -- common/autotest_common.sh@861 -- # return 0 00:06:04.866 09:59:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:04.866 09:59:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:04.866 09:59:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:04.866 09:59:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:04.866 09:59:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:04.866 09:59:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:04.866 09:59:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:04.866 09:59:50 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:04.866 09:59:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.866 09:59:50 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:04.866 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.866 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.866 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.866 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.866 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.866 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.866 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.866 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.866 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 
09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:04.867 09:59:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:04.867 09:59:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:04.867 09:59:50 accel -- accel/accel.sh@75 -- # killprocess 2600632 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@947 -- # '[' -z 2600632 ']' 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@951 -- # kill -0 2600632 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@952 -- # uname 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2600632 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2600632' 00:06:04.867 killing process with pid 2600632 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@966 -- # kill 2600632 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@971 -- # wait 2600632 00:06:04.867 09:59:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:04.867 09:59:50 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:04.867 09:59:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.867 09:59:50 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.867 09:59:50 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:05.129 09:59:50 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
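[Editor's note] The trace above (accel.sh@70-73) shows how the test builds its expected_opcs map: it asks the running target for its opcode-to-module assignments over RPC, flattens the JSON with jq, and splits each key=value pair on '='. A minimal stand-alone sketch of that pattern is below; the jq filter and the IFS== read are the ones visible in the trace, while the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions, not something this log confirms.

    #!/usr/bin/env bash
    # Sketch: collect accel opcode -> module assignments from a running SPDK target.
    declare -A expected_opcs
    while IFS== read -r opc module; do
        expected_opcs["$opc"]=$module        # e.g. copy=software
    done < <(scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
             | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')
    for opc in "${!expected_opcs[@]}"; do
        echo "$opc handled by ${expected_opcs[$opc]}"
    done

In this run every opcode resolves to the software module, which is why each loop iteration above records expected_opcs["$opc"]=software.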
00:06:05.129 09:59:50 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.129 09:59:50 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:05.129 09:59:50 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:05.129 09:59:50 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:05.129 09:59:50 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.129 09:59:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.129 ************************************ 00:06:05.129 START TEST accel_missing_filename 00:06:05.129 ************************************ 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.129 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:05.129 09:59:50 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:05.129 [2024-05-15 09:59:50.796960] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:05.129 [2024-05-15 09:59:50.797046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601002 ] 00:06:05.129 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.129 [2024-05-15 09:59:50.859017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.129 [2024-05-15 09:59:50.889305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.129 [2024-05-15 09:59:50.921328] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.391 [2024-05-15 09:59:50.958204] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:05.391 A filename is required. 
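[Editor's note] The "A filename is required." error above is expected: accel_missing_filename runs accel_perf under the NOT helper, so a non-zero exit from the compress workload (invoked without -l) is what makes the test pass, as the exit-status handling that follows shows. A simplified sketch of that negative-test idiom, not the project's actual helper from autotest_common.sh:

    # Sketch: succeed only if the wrapped command fails.
    expect_failure() {
        if "$@"; then
            echo "expected failure, but command succeeded: $*" >&2
            return 1
        fi
        return 0
    }
    expect_failure ./build/examples/accel_perf -t 1 -w compress   # no -l input file given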
00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.391 00:06:05.391 real 0m0.231s 00:06:05.391 user 0m0.174s 00:06:05.391 sys 0m0.099s 00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.391 09:59:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:05.391 ************************************ 00:06:05.391 END TEST accel_missing_filename 00:06:05.391 ************************************ 00:06:05.391 09:59:51 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.391 09:59:51 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:05.391 09:59:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.391 09:59:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.391 ************************************ 00:06:05.391 START TEST accel_compress_verify 00:06:05.391 ************************************ 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.391 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.391 
09:59:51 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:05.391 09:59:51 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:05.391 [2024-05-15 09:59:51.095790] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:05.391 [2024-05-15 09:59:51.095840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601025 ] 00:06:05.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.391 [2024-05-15 09:59:51.152007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.391 [2024-05-15 09:59:51.181619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.654 [2024-05-15 09:59:51.213640] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.654 [2024-05-15 09:59:51.250472] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:05.654 00:06:05.654 Compression does not support the verify option, aborting. 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.654 00:06:05.654 real 0m0.211s 00:06:05.654 user 0m0.156s 00:06:05.654 sys 0m0.095s 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.654 09:59:51 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:05.654 ************************************ 00:06:05.654 END TEST accel_compress_verify 00:06:05.654 ************************************ 00:06:05.654 09:59:51 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:05.654 09:59:51 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:05.654 09:59:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.654 09:59:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.654 ************************************ 00:06:05.654 START TEST accel_wrong_workload 00:06:05.654 ************************************ 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:05.654 09:59:51 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:05.654 Unsupported workload type: foobar 00:06:05.654 [2024-05-15 09:59:51.400889] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:05.654 accel_perf options: 00:06:05.654 [-h help message] 00:06:05.654 [-q queue depth per core] 00:06:05.654 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.654 [-T number of threads per core 00:06:05.654 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.654 [-t time in seconds] 00:06:05.654 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.654 [ dif_verify, , dif_generate, dif_generate_copy 00:06:05.654 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.654 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.654 [-S for crc32c workload, use this seed value (default 0) 00:06:05.654 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.654 [-f for fill workload, use this BYTE value (default 255) 00:06:05.654 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.654 [-y verify result if this switch is on] 00:06:05.654 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.654 Can be used to spread operations across a wider range of memory. 
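[Editor's note] The options listing above is accel_perf rejecting the deliberately invalid '-w foobar' workload, so the usage text itself is the expected output here. For contrast, a valid invocation built only from flags shown in that help text might look like the line below; the queue depth, transfer size, and duration are illustrative values, and the binary path is the one used throughout this run.

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -q 64 -o 4096 -t 1 -w crc32c -S 32 -y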
00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.654 00:06:05.654 real 0m0.035s 00:06:05.654 user 0m0.020s 00:06:05.654 sys 0m0.015s 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.654 09:59:51 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:05.654 ************************************ 00:06:05.654 END TEST accel_wrong_workload 00:06:05.654 ************************************ 00:06:05.654 Error: writing output failed: Broken pipe 00:06:05.654 09:59:51 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.654 09:59:51 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:06:05.654 09:59:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.654 09:59:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.920 ************************************ 00:06:05.920 START TEST accel_negative_buffers 00:06:05.920 ************************************ 00:06:05.920 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:05.920 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:06:05.920 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:05.920 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:06:05.920 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.920 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:06:05.920 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:05.921 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:05.921 09:59:51 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:05.921 -x option must be non-negative. 
00:06:05.921 [2024-05-15 09:59:51.513617] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:05.921 accel_perf options: 00:06:05.921 [-h help message] 00:06:05.921 [-q queue depth per core] 00:06:05.921 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:05.921 [-T number of threads per core 00:06:05.921 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:05.921 [-t time in seconds] 00:06:05.921 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:05.921 [ dif_verify, , dif_generate, dif_generate_copy 00:06:05.921 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:05.921 [-l for compress/decompress workloads, name of uncompressed input file 00:06:05.921 [-S for crc32c workload, use this seed value (default 0) 00:06:05.921 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:05.921 [-f for fill workload, use this BYTE value (default 255) 00:06:05.921 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:05.921 [-y verify result if this switch is on] 00:06:05.921 [-a tasks to allocate per core (default: same value as -q)] 00:06:05.921 Can be used to spread operations across a wider range of memory. 00:06:05.921 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:06:05.921 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:05.921 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:05.921 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:05.921 00:06:05.921 real 0m0.035s 00:06:05.921 user 0m0.021s 00:06:05.921 sys 0m0.013s 00:06:05.921 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:05.921 09:59:51 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 ************************************ 00:06:05.921 END TEST accel_negative_buffers 00:06:05.921 ************************************ 00:06:05.921 Error: writing output failed: Broken pipe 00:06:05.921 09:59:51 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:05.921 09:59:51 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:05.921 09:59:51 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:05.921 09:59:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.921 ************************************ 00:06:05.921 START TEST accel_crc32c 00:06:05.921 ************************************ 00:06:05.921 09:59:51 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:05.921 09:59:51 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:05.921 [2024-05-15 09:59:51.621558] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:05.921 [2024-05-15 09:59:51.621621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601118 ] 00:06:05.921 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.921 [2024-05-15 09:59:51.684255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.186 [2024-05-15 09:59:51.720361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:06.186 09:59:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.130 09:59:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.130 09:59:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.131 09:59:52 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:07.131 09:59:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.131 00:06:07.131 real 0m1.242s 00:06:07.131 user 0m1.147s 00:06:07.131 sys 0m0.108s 00:06:07.131 09:59:52 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:07.131 09:59:52 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:07.131 ************************************ 00:06:07.131 END TEST accel_crc32c 00:06:07.131 ************************************ 00:06:07.131 09:59:52 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:07.131 09:59:52 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:07.131 09:59:52 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:07.131 09:59:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.131 ************************************ 00:06:07.131 START TEST accel_crc32c_C2 00:06:07.131 ************************************ 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.131 09:59:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:07.393 [2024-05-15 09:59:52.939936] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:07.393 [2024-05-15 09:59:52.940001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601438 ] 00:06:07.393 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.393 [2024-05-15 09:59:53.000226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.393 [2024-05-15 09:59:53.032510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.393 09:59:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.782 09:59:54 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.782 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.783 00:06:08.783 real 0m1.235s 00:06:08.783 user 0m1.140s 00:06:08.783 sys 0m0.106s 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:08.783 09:59:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:08.783 ************************************ 00:06:08.783 END TEST accel_crc32c_C2 00:06:08.783 ************************************ 00:06:08.783 09:59:54 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:08.783 09:59:54 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:08.783 09:59:54 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:08.783 09:59:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.783 ************************************ 00:06:08.783 START TEST accel_copy 00:06:08.783 ************************************ 00:06:08.783 09:59:54 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:08.783 09:59:54 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:08.783 [2024-05-15 09:59:54.249123] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:08.783 [2024-05-15 09:59:54.249209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2601788 ] 00:06:08.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.783 [2024-05-15 09:59:54.310536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.783 [2024-05-15 09:59:54.342345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.783 09:59:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:09.728 09:59:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.728 00:06:09.728 real 0m1.238s 00:06:09.728 user 0m1.152s 00:06:09.728 sys 0m0.096s 00:06:09.728 09:59:55 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:09.728 09:59:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:09.728 ************************************ 00:06:09.728 END TEST accel_copy 00:06:09.728 ************************************ 00:06:09.728 09:59:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.728 09:59:55 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:09.728 09:59:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:09.728 09:59:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.990 ************************************ 00:06:09.990 START TEST accel_fill 00:06:09.990 ************************************ 00:06:09.990 09:59:55 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.990 09:59:55 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:09.990 [2024-05-15 09:59:55.544338] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:09.990 [2024-05-15 09:59:55.544373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602144 ] 00:06:09.990 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.990 [2024-05-15 09:59:55.594406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.990 [2024-05-15 09:59:55.624282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.990 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:09.991 09:59:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:11.380 09:59:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.380 00:06:11.380 real 0m1.206s 00:06:11.380 user 0m1.125s 00:06:11.380 sys 0m0.093s 00:06:11.380 09:59:56 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:11.380 09:59:56 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 ************************************ 00:06:11.380 END TEST accel_fill 00:06:11.380 ************************************ 00:06:11.380 09:59:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:11.380 09:59:56 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:11.380 09:59:56 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:11.380 09:59:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 ************************************ 00:06:11.380 START TEST accel_copy_crc32c 00:06:11.380 ************************************ 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
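The xtrace above repeats one harness shape for every accel workload in this stretch of the log: run_test hands the workload options to accel_test, which runs the accel_perf example binary with the flags visible in the trace. A minimal bash sketch of that shape follows; the accel_test body here is a simplified stand-in (the real helper builds its JSON config separately), and only the option strings themselves are copied verbatim from the trace rather than interpreted.

```bash
#!/usr/bin/env bash
ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf

accel_test() {
    # Forward the workload options to accel_perf, supplying a minimal JSON
    # config on fd 62 (the real harness assembles this from accel_json_cfg + jq).
    "$ACCEL_PERF" -c /dev/fd/62 "$@" 62< <(echo '{}')
}

# Option strings copied verbatim from the test cases in this part of the log:
accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
accel_test -t 1 -w copy_crc32c -y
accel_test -t 1 -w copy_crc32c -y -C 2
accel_test -t 1 -w dualcast -y
accel_test -t 1 -w compare -y
accel_test -t 1 -w xor -y
accel_test -t 1 -w xor -y -x 3
```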
00:06:11.380 [2024-05-15 09:59:56.844363] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:11.380 [2024-05-15 09:59:56.844446] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602322 ] 00:06:11.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.380 [2024-05-15 09:59:56.908114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.380 [2024-05-15 09:59:56.942248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:11.380 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.381 09:59:56 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.381 09:59:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.326 00:06:12.326 real 0m1.243s 00:06:12.326 user 0m1.148s 00:06:12.326 sys 0m0.106s 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:12.326 09:59:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:12.326 ************************************ 00:06:12.326 END TEST accel_copy_crc32c 00:06:12.326 ************************************ 00:06:12.326 09:59:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.326 09:59:58 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:12.326 09:59:58 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:12.326 09:59:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.588 ************************************ 00:06:12.588 START TEST accel_copy_crc32c_C2 00:06:12.588 ************************************ 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:12.588 [2024-05-15 09:59:58.162276] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:12.588 [2024-05-15 09:59:58.162343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602528 ] 00:06:12.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.588 [2024-05-15 09:59:58.224528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.588 [2024-05-15 09:59:58.256900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.588 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:12.589 09:59:58 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.589 09:59:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.973 00:06:13.973 real 0m1.239s 00:06:13.973 user 0m1.151s 00:06:13.973 sys 0m0.100s 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:13.973 09:59:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:13.973 ************************************ 00:06:13.973 END TEST accel_copy_crc32c_C2 00:06:13.973 ************************************ 00:06:13.973 09:59:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:13.973 09:59:59 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:13.973 09:59:59 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:13.973 09:59:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.973 ************************************ 00:06:13.973 START TEST accel_dualcast 00:06:13.973 ************************************ 00:06:13.973 09:59:59 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:13.973 [2024-05-15 09:59:59.479295] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
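The build_accel_config lines repeated before each run (accel_json_cfg=(), the three "[[ 0 -gt 0 ]]" guards, "local IFS=,", "jq -r .") assemble the JSON that accel_perf reads through -c /dev/fd/62. The sketch below is a hypothetical reconstruction of that idiom under the assumption that the empty array simply yields an empty config list; the variable name and the jq call are taken from the trace, everything else is illustrative.

```bash
# Hypothetical reconstruction of the build_accel_config idiom from the trace:
# per-module JSON fragments collect in accel_json_cfg, get joined with IFS=','
# and run through jq before accel_perf reads them from /dev/fd/62.
accel_json_cfg=()                          # stays empty here, so every "-gt 0" guard fails
join_cfg() { local IFS=,; echo "[$*]"; }
join_cfg "${accel_json_cfg[@]}" | jq -r .  # prints just "[]" for these runs
```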
00:06:13.973 [2024-05-15 09:59:59.479356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602883 ] 00:06:13.973 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.973 [2024-05-15 09:59:59.539108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.973 [2024-05-15 09:59:59.570349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:13.973 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 
09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:13.974 09:59:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.918 10:00:00 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:14.918 10:00:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.918 00:06:14.918 real 0m1.236s 00:06:14.918 user 0m1.147s 00:06:14.918 sys 0m0.099s 00:06:14.918 10:00:00 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:14.918 10:00:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:14.918 ************************************ 00:06:14.918 END TEST accel_dualcast 00:06:14.918 ************************************ 00:06:15.179 10:00:00 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:15.179 10:00:00 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:15.179 10:00:00 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:15.179 10:00:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.179 ************************************ 00:06:15.179 START TEST accel_compare 00:06:15.179 ************************************ 00:06:15.179 10:00:00 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:15.179 10:00:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:15.179 [2024-05-15 10:00:00.793232] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
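The long runs of 'case "$var" in', 'IFS=:' and 'read -r var val' above are one loop in accel.sh consuming colon-separated key/value pairs that describe the run (opcode, module, block size, queue depth, duration, and so on) and recording the fields the end-of-test checks need. The keys and the sample input below are hypothetical; only the parsing idiom itself is taken from the trace.

```bash
# Illustration of the parsing idiom behind most of the xtrace above:
# read "key:value" lines, dispatch on the key, and remember the fields
# that the closing assertions will inspect.
while IFS=: read -r var val; do
    case "$var" in
        opc)    accel_opc=$val ;;       # e.g. fill, copy_crc32c, dualcast, xor
        module) accel_module=$val ;;    # e.g. software
        *)      ;;                      # everything else is ignored here
    esac
done <<'EOF'
opc:dualcast
module:software
EOF
echo "opcode=$accel_opc module=$accel_module"
```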
00:06:15.180 [2024-05-15 10:00:00.793331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603271 ] 00:06:15.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.180 [2024-05-15 10:00:00.856323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.180 [2024-05-15 10:00:00.892097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.180 10:00:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:16.566 10:00:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.566 00:06:16.566 real 0m1.245s 00:06:16.566 user 0m1.153s 00:06:16.566 sys 0m0.103s 00:06:16.566 10:00:02 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:16.566 10:00:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:16.566 ************************************ 00:06:16.566 END TEST accel_compare 00:06:16.566 ************************************ 00:06:16.566 10:00:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:16.566 10:00:02 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:06:16.566 10:00:02 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:16.566 10:00:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.566 ************************************ 00:06:16.566 START TEST accel_xor 00:06:16.566 ************************************ 00:06:16.566 10:00:02 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:16.566 [2024-05-15 10:00:02.113496] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
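Each case finishes with the three "[[ ... ]]" checks visible above: the module reported by the run must be non-empty, the captured opcode must be non-empty, and the module must equal the expected engine ("software" in all of these runs). A minimal restatement of those checks, using the same variable names the trace exposes:

```bash
# Closing assertions mirrored from the trace; exit non-zero if any check fails.
expected_module=software
[[ -n "$accel_module" ]] || exit 1                      # a module was reported
[[ -n "$accel_opc" ]] || exit 1                         # an opcode was captured
[[ "$accel_module" == "$expected_module" ]] || exit 1   # the software engine was used
```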
00:06:16.566 [2024-05-15 10:00:02.113558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603628 ] 00:06:16.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.566 [2024-05-15 10:00:02.175201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.566 [2024-05-15 10:00:02.209454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.566 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:16.567 10:00:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.951 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 
10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.952 00:06:17.952 real 0m1.240s 00:06:17.952 user 0m1.150s 00:06:17.952 sys 0m0.102s 00:06:17.952 10:00:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:17.952 10:00:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:17.952 ************************************ 00:06:17.952 END TEST accel_xor 00:06:17.952 ************************************ 00:06:17.952 10:00:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:17.952 10:00:03 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:17.952 10:00:03 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:17.952 10:00:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.952 ************************************ 00:06:17.952 START TEST accel_xor 00:06:17.952 ************************************ 00:06:17.952 10:00:03 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:17.952 [2024-05-15 10:00:03.435399] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
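Each TEST block in this section drives the same example binary through the accel_test wrapper; the trace records it as /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3. As a minimal sketch, an equivalent direct invocation of the xor case started above would look like the following; the binary path and the -t/-w/-y/-x flags are copied from this log, while dropping -c (and the harness-generated JSON config behind /dev/fd/62) is an assumption that the defaults are acceptable for a hand run.

  # Sketch: rerun the xor case traced above outside the harness (assumed defaults,
  # no JSON config; the harness normally feeds one via -c /dev/fd/62).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w xor -y -x 3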
00:06:17.952 [2024-05-15 10:00:03.435484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603794 ] 00:06:17.952 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.952 [2024-05-15 10:00:03.495061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.952 [2024-05-15 10:00:03.525096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.952 10:00:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.897 10:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.897 10:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.898 
10:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:18.898 10:00:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.898 00:06:18.898 real 0m1.234s 00:06:18.898 user 0m1.139s 00:06:18.898 sys 0m0.107s 00:06:18.898 10:00:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:18.898 10:00:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:18.898 ************************************ 00:06:18.898 END TEST accel_xor 00:06:18.898 ************************************ 00:06:18.898 10:00:04 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:18.898 10:00:04 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:18.898 10:00:04 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:18.898 10:00:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.159 ************************************ 00:06:19.159 START TEST accel_dif_verify 00:06:19.159 ************************************ 00:06:19.159 10:00:04 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:06:19.159 10:00:04 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:19.159 10:00:04 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:19.160 [2024-05-15 10:00:04.747454] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
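Every case in this log finishes with the same post-run assertion, visible in the trace at accel/accel.sh@27: the wrapper keeps the module and opcode parsed from the perf output in accel_module and accel_opc and requires the software module to have executed the workload. A minimal sketch of that check, with the values the dif_verify case started above produces (the echo is illustrative only, not part of the script):

  # Both variables must be non-empty and the software module must be the one
  # that ran the workload, mirroring the [[ ... ]] tests traced at accel.sh@27.
  accel_module=software
  accel_opc=dif_verify
  [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] \
      && [[ "$accel_module" == software ]] \
      && echo "PASS: $accel_opc on $accel_module"   # illustrative output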
00:06:19.160 [2024-05-15 10:00:04.747519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604080 ] 00:06:19.160 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.160 [2024-05-15 10:00:04.809897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.160 [2024-05-15 10:00:04.843425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 
10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.160 10:00:04 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.560 
10:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:20.560 10:00:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.560 00:06:20.560 real 0m1.241s 00:06:20.560 user 0m1.147s 00:06:20.560 sys 0m0.107s 00:06:20.560 10:00:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:20.560 10:00:05 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:20.560 ************************************ 00:06:20.560 END TEST accel_dif_verify 00:06:20.560 ************************************ 00:06:20.560 10:00:05 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:20.560 10:00:05 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:20.560 10:00:05 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:20.560 10:00:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.560 ************************************ 00:06:20.560 START TEST accel_dif_generate 00:06:20.560 ************************************ 00:06:20.560 10:00:06 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 
10:00:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:20.560 [2024-05-15 10:00:06.066859] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:20.560 [2024-05-15 10:00:06.066942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604429 ] 00:06:20.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.560 [2024-05-15 10:00:06.129525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.560 [2024-05-15 10:00:06.163322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:20.560 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:20.561 10:00:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:21.532 10:00:07 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.532 00:06:21.532 real 0m1.242s 00:06:21.532 user 0m1.157s 00:06:21.532 sys 
0m0.098s 00:06:21.532 10:00:07 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:21.532 10:00:07 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:21.532 ************************************ 00:06:21.532 END TEST accel_dif_generate 00:06:21.532 ************************************ 00:06:21.532 10:00:07 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:21.532 10:00:07 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:21.532 10:00:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:21.532 10:00:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.794 ************************************ 00:06:21.794 START TEST accel_dif_generate_copy 00:06:21.794 ************************************ 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:21.794 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:21.795 [2024-05-15 10:00:07.389377] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
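The START/END banners and the real/user/sys summaries around each case come from the run_test wrapper in common/autotest_common.sh, which times the body it is handed. The registrations recorded in this section (here and further down, including the bib input file used by the compress/decompress cases) are, verbatim from the trace:

  # Registrations as traced at accel/accel.sh@113, @116 and @117; run_test prints
  # the START/END TEST banners and the timing summary seen throughout this section.
  run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
  run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y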
00:06:21.795 [2024-05-15 10:00:07.389470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604779 ] 00:06:21.795 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.795 [2024-05-15 10:00:07.449965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.795 [2024-05-15 10:00:07.480159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.795 10:00:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.182 00:06:23.182 real 0m1.238s 00:06:23.182 user 0m1.150s 00:06:23.182 sys 0m0.099s 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:23.182 10:00:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:23.182 ************************************ 00:06:23.182 END TEST accel_dif_generate_copy 00:06:23.182 ************************************ 00:06:23.182 10:00:08 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:23.182 10:00:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.182 10:00:08 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:06:23.182 10:00:08 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:23.182 10:00:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.182 ************************************ 00:06:23.182 START TEST accel_comp 00:06:23.182 ************************************ 00:06:23.182 10:00:08 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:23.182 10:00:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:23.183 [2024-05-15 10:00:08.702721] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:23.183 [2024-05-15 10:00:08.702803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605332 ] 00:06:23.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.183 [2024-05-15 10:00:08.765892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.183 [2024-05-15 10:00:08.798974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 
10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:23.183 10:00:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:24.125 10:00:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.125 00:06:24.125 real 0m1.239s 00:06:24.125 user 0m1.135s 00:06:24.125 sys 0m0.104s 00:06:24.125 10:00:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:24.125 10:00:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:24.125 ************************************ 00:06:24.125 END TEST accel_comp 00:06:24.125 ************************************ 00:06:24.385 10:00:09 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.385 10:00:09 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:06:24.385 10:00:09 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:24.385 10:00:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.385 ************************************ 00:06:24.385 START TEST accel_decomp 00:06:24.385 ************************************ 00:06:24.385 10:00:09 
accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.385 10:00:09 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:24.385 10:00:09 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:24.385 10:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.385 10:00:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.385 10:00:09 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:24.386 10:00:09 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:24.386 [2024-05-15 10:00:10.023810] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:24.386 [2024-05-15 10:00:10.023902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605649 ] 00:06:24.386 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.386 [2024-05-15 10:00:10.085901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.386 [2024-05-15 10:00:10.121690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.386 10:00:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.773 10:00:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.773 00:06:25.773 real 0m1.247s 00:06:25.773 user 0m1.153s 00:06:25.773 sys 0m0.106s 00:06:25.773 10:00:11 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:25.773 10:00:11 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:25.774 ************************************ 00:06:25.774 END TEST accel_decomp 00:06:25.774 ************************************ 00:06:25.774 
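The next case, accel_decmop_full (the spelling comes from the test script itself), repeats the decompress workload with -o 0 appended; judging by the configuration echoed below, that switches the buffer size from the default '4096 bytes' to the full '111250 bytes' of the test file. A hedged one-liner for just that variant, with the flag meaning inferred from this log rather than from accel_perf's help output:
  # sketch: full-buffer software decompress, assuming -o 0 selects the whole-file size
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0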
10:00:11 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.774 10:00:11 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:25.774 10:00:11 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:25.774 10:00:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.774 ************************************ 00:06:25.774 START TEST accel_decmop_full 00:06:25.774 ************************************ 00:06:25.774 10:00:11 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:25.774 [2024-05-15 10:00:11.341820] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:25.774 [2024-05-15 10:00:11.341916] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2605979 ] 00:06:25.774 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.774 [2024-05-15 10:00:11.412576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.774 [2024-05-15 10:00:11.447512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.774 10:00:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:27.161 10:00:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:27.162 10:00:12 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.162 00:06:27.162 real 0m1.263s 00:06:27.162 user 0m1.161s 00:06:27.162 sys 0m0.114s 00:06:27.162 10:00:12 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:27.162 10:00:12 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:27.162 ************************************ 00:06:27.162 END TEST accel_decmop_full 00:06:27.162 ************************************ 00:06:27.162 10:00:12 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.162 10:00:12 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:27.162 10:00:12 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:27.162 10:00:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.162 ************************************ 00:06:27.162 START TEST accel_decomp_mcore 00:06:27.162 ************************************ 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:27.162 [2024-05-15 10:00:12.677477] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:27.162 [2024-05-15 10:00:12.677566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606326 ] 00:06:27.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.162 [2024-05-15 10:00:12.740773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.162 [2024-05-15 10:00:12.778791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.162 [2024-05-15 10:00:12.778910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.162 [2024-05-15 10:00:12.779067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.162 [2024-05-15 10:00:12.779068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.162 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.163 10:00:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.107 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.369 00:06:28.369 real 0m1.256s 00:06:28.369 user 0m4.393s 00:06:28.369 sys 0m0.113s 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:28.369 10:00:13 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 ************************************ 00:06:28.369 END TEST accel_decomp_mcore 00:06:28.369 ************************************ 00:06:28.369 10:00:13 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.369 10:00:13 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:28.369 10:00:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:28.369 10:00:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 ************************************ 00:06:28.369 START TEST accel_decomp_full_mcore 00:06:28.369 ************************************ 00:06:28.369 10:00:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.369 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:28.369 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:28.369 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:28.370 10:00:13 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:28.370 [2024-05-15 10:00:14.015164] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:28.370 [2024-05-15 10:00:14.015228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606687 ] 00:06:28.370 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.370 [2024-05-15 10:00:14.076624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.370 [2024-05-15 10:00:14.112715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.370 [2024-05-15 10:00:14.112831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.370 [2024-05-15 10:00:14.112987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.370 [2024-05-15 10:00:14.112988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.370 10:00:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.759 00:06:29.759 real 0m1.265s 00:06:29.759 user 0m4.448s 00:06:29.759 sys 0m0.107s 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:29.759 10:00:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:29.759 ************************************ 00:06:29.759 END TEST accel_decomp_full_mcore 00:06:29.759 ************************************ 00:06:29.759 10:00:15 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.759 10:00:15 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:06:29.759 10:00:15 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:29.759 10:00:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.759 ************************************ 00:06:29.759 START TEST accel_decomp_mthread 00:06:29.759 ************************************ 00:06:29.759 10:00:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.759 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:29.759 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
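Both *_mcore cases above passed -m 0xf, and the reactor notices confirm four cores came up; the roughly 4.4s of user time against ~1.25s of real time is consistent with four cores decompressing in parallel. The accel_decomp_mthread case beginning here drops the core mask and instead passes -T 2, which (reading the 'val=2' echoed in its configuration) asks for two worker threads on a single core. A sketch of the two invocations, with flag meanings inferred from the surrounding log:
  # sketch: multi-core vs multi-threaded decompress, flags as seen in this log
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf   # core mask 0xf -> 4 reactors
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2     # 2 threads, default single core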
00:06:29.760 [2024-05-15 10:00:15.362647] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:29.760 [2024-05-15 10:00:15.362730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606875 ] 00:06:29.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.760 [2024-05-15 10:00:15.427441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.760 [2024-05-15 10:00:15.465202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.760 10:00:15 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.168 00:06:31.168 real 0m1.255s 00:06:31.168 user 0m1.161s 00:06:31.168 sys 0m0.107s 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:31.168 10:00:16 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:31.168 ************************************ 00:06:31.168 END TEST accel_decomp_mthread 00:06:31.168 ************************************ 00:06:31.168 10:00:16 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.168 10:00:16 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:06:31.168 10:00:16 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:31.168 10:00:16 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.168 ************************************ 00:06:31.168 START TEST accel_decomp_full_mthread 00:06:31.168 ************************************ 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:31.168 [2024-05-15 10:00:16.695922] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:06:31.168 [2024-05-15 10:00:16.695986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607072 ] 00:06:31.168 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.168 [2024-05-15 10:00:16.759204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.168 [2024-05-15 10:00:16.794904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.168 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.169 10:00:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.558 00:06:32.558 real 0m1.276s 00:06:32.558 user 0m1.187s 00:06:32.558 sys 0m0.101s 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:32.558 10:00:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:32.558 ************************************ 00:06:32.558 END TEST accel_decomp_full_mthread 00:06:32.558 
************************************ 00:06:32.558 10:00:17 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:32.558 10:00:17 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.558 10:00:17 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:32.558 10:00:17 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:32.558 10:00:17 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:32.558 10:00:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.558 10:00:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.558 10:00:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.558 10:00:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.558 10:00:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.558 10:00:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.558 10:00:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:32.558 10:00:17 accel -- accel/accel.sh@41 -- # jq -r . 00:06:32.558 ************************************ 00:06:32.558 START TEST accel_dif_functional_tests 00:06:32.558 ************************************ 00:06:32.558 10:00:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.558 [2024-05-15 10:00:18.081592] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:32.558 [2024-05-15 10:00:18.081647] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607427 ] 00:06:32.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.558 [2024-05-15 10:00:18.145088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.558 [2024-05-15 10:00:18.185139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.558 [2024-05-15 10:00:18.185261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.558 [2024-05-15 10:00:18.185263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.558 00:06:32.558 00:06:32.558 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.559 http://cunit.sourceforge.net/ 00:06:32.559 00:06:32.559 00:06:32.559 Suite: accel_dif 00:06:32.559 Test: verify: DIF generated, GUARD check ...passed 00:06:32.559 Test: verify: DIF generated, APPTAG check ...passed 00:06:32.559 Test: verify: DIF generated, REFTAG check ...passed 00:06:32.559 Test: verify: DIF not generated, GUARD check ...[2024-05-15 10:00:18.236201] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.559 [2024-05-15 10:00:18.236246] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.559 passed 00:06:32.559 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 10:00:18.236278] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.559 [2024-05-15 10:00:18.236296] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.559 passed 00:06:32.559 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 10:00:18.236312] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.559 [2024-05-15 
10:00:18.236327] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.559 passed 00:06:32.559 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:32.559 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 10:00:18.236372] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:32.559 passed 00:06:32.559 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:32.559 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:32.559 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:32.559 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 10:00:18.236487] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:32.559 passed 00:06:32.559 Test: generate copy: DIF generated, GUARD check ...passed 00:06:32.559 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:32.559 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:32.559 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:32.559 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:32.559 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:32.559 Test: generate copy: iovecs-len validate ...[2024-05-15 10:00:18.236680] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:32.559 passed 00:06:32.559 Test: generate copy: buffer alignment validate ...passed 00:06:32.559 00:06:32.559 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.559 suites 1 1 n/a 0 0 00:06:32.559 tests 20 20 20 0 0 00:06:32.559 asserts 204 204 204 0 n/a 00:06:32.559 00:06:32.559 Elapsed time = 0.002 seconds 00:06:32.559 00:06:32.559 real 0m0.312s 00:06:32.559 user 0m0.388s 00:06:32.559 sys 0m0.132s 00:06:32.559 10:00:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:32.559 10:00:18 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:32.559 ************************************ 00:06:32.559 END TEST accel_dif_functional_tests 00:06:32.559 ************************************ 00:06:32.822 00:06:32.822 real 0m29.019s 00:06:32.822 user 0m32.599s 00:06:32.822 sys 0m4.098s 00:06:32.822 10:00:18 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:32.822 10:00:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.822 ************************************ 00:06:32.822 END TEST accel 00:06:32.822 ************************************ 00:06:32.822 10:00:18 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:32.822 10:00:18 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:32.822 10:00:18 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:32.822 10:00:18 -- common/autotest_common.sh@10 -- # set +x 00:06:32.822 ************************************ 00:06:32.822 START TEST accel_rpc 00:06:32.822 ************************************ 00:06:32.822 10:00:18 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:32.822 * Looking for test storage... 
00:06:32.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:32.822 10:00:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.822 10:00:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2607548 00:06:32.822 10:00:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2607548 00:06:32.822 10:00:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:32.822 10:00:18 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 2607548 ']' 00:06:32.822 10:00:18 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.822 10:00:18 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:32.822 10:00:18 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.822 10:00:18 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:32.822 10:00:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.822 [2024-05-15 10:00:18.612256] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:06:32.822 [2024-05-15 10:00:18.612337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607548 ] 00:06:33.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.084 [2024-05-15 10:00:18.676092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.084 [2024-05-15 10:00:18.715864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.655 10:00:19 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:33.655 10:00:19 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:33.655 10:00:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:33.655 10:00:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:33.655 10:00:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:33.655 10:00:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:33.655 10:00:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:33.655 10:00:19 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:33.655 10:00:19 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:33.655 10:00:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.655 ************************************ 00:06:33.655 START TEST accel_assign_opcode 00:06:33.655 ************************************ 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.655 [2024-05-15 10:00:19.417934] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.655 [2024-05-15 10:00:19.429961] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.655 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.917 software 00:06:33.917 00:06:33.917 real 0m0.196s 00:06:33.917 user 0m0.048s 00:06:33.917 sys 0m0.011s 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:33.917 10:00:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.917 ************************************ 00:06:33.917 END TEST accel_assign_opcode 00:06:33.917 ************************************ 00:06:33.917 10:00:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2607548 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 2607548 ']' 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 2607548 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2607548 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2607548' 00:06:33.917 killing process with pid 2607548 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@966 -- # kill 2607548 00:06:33.917 10:00:19 accel_rpc -- common/autotest_common.sh@971 -- # wait 2607548 00:06:34.179 00:06:34.179 real 0m1.435s 00:06:34.179 user 0m1.515s 00:06:34.179 sys 0m0.408s 00:06:34.179 10:00:19 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:34.179 10:00:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.179 ************************************ 00:06:34.179 END TEST accel_rpc 00:06:34.179 ************************************ 00:06:34.179 10:00:19 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.179 10:00:19 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:34.179 10:00:19 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:34.179 10:00:19 -- common/autotest_common.sh@10 -- # set +x 00:06:34.442 ************************************ 00:06:34.442 START TEST app_cmdline 00:06:34.442 ************************************ 00:06:34.442 10:00:19 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.442 * Looking for test storage... 00:06:34.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:34.442 10:00:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.442 10:00:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2607904 00:06:34.442 10:00:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2607904 00:06:34.442 10:00:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.442 10:00:20 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 2607904 ']' 00:06:34.442 10:00:20 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.442 10:00:20 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:34.442 10:00:20 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.442 10:00:20 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:34.442 10:00:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.442 [2024-05-15 10:00:20.137070] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
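The spdk_tgt instance being launched here runs with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), and cmdline.sh then probes it both with an allowed method and with one outside the list. Condensed from the trace that follows, the two probes behave like this sketch (paths are relative to the spdk checkout shown above, and the allow-listed target is assumed to already be listening on the default socket):

  ./scripts/rpc.py spdk_get_version        # allowed: returns the JSON version object printed below
  ./scripts/rpc.py env_dpdk_get_mem_stats  # not on the allow-list: fails with JSON-RPC error -32601 "Method not found"

The -32601 code is the standard JSON-RPC "method not found" error, so the test passes as long as the disallowed call is rejected rather than executed.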
00:06:34.442 [2024-05-15 10:00:20.137148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2607904 ] 00:06:34.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.442 [2024-05-15 10:00:20.202861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.704 [2024-05-15 10:00:20.240621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.277 10:00:20 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:35.277 10:00:20 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:06:35.277 10:00:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:35.277 { 00:06:35.277 "version": "SPDK v24.05-pre git sha1 4506c0c36", 00:06:35.277 "fields": { 00:06:35.277 "major": 24, 00:06:35.277 "minor": 5, 00:06:35.277 "patch": 0, 00:06:35.277 "suffix": "-pre", 00:06:35.277 "commit": "4506c0c36" 00:06:35.277 } 00:06:35.277 } 00:06:35.537 10:00:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.537 10:00:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.538 10:00:21 
app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.538 request: 00:06:35.538 { 00:06:35.538 "method": "env_dpdk_get_mem_stats", 00:06:35.538 "req_id": 1 00:06:35.538 } 00:06:35.538 Got JSON-RPC error response 00:06:35.538 response: 00:06:35.538 { 00:06:35.538 "code": -32601, 00:06:35.538 "message": "Method not found" 00:06:35.538 } 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:35.538 10:00:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2607904 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 2607904 ']' 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 2607904 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:35.538 10:00:21 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2607904 00:06:35.799 10:00:21 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:35.799 10:00:21 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:35.799 10:00:21 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2607904' 00:06:35.799 killing process with pid 2607904 00:06:35.799 10:00:21 app_cmdline -- common/autotest_common.sh@966 -- # kill 2607904 00:06:35.799 10:00:21 app_cmdline -- common/autotest_common.sh@971 -- # wait 2607904 00:06:35.799 00:06:35.799 real 0m1.567s 00:06:35.799 user 0m1.916s 00:06:35.799 sys 0m0.393s 00:06:35.799 10:00:21 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:35.799 10:00:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.799 ************************************ 00:06:35.799 END TEST app_cmdline 00:06:35.799 ************************************ 00:06:35.799 10:00:21 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:35.799 10:00:21 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:35.799 10:00:21 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:35.799 10:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.061 ************************************ 00:06:36.061 START TEST version 00:06:36.061 ************************************ 00:06:36.061 10:00:21 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.061 * Looking for test storage... 
00:06:36.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.061 10:00:21 version -- app/version.sh@17 -- # get_header_version major 00:06:36.061 10:00:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # cut -f2 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.061 10:00:21 version -- app/version.sh@17 -- # major=24 00:06:36.061 10:00:21 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.061 10:00:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # cut -f2 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.061 10:00:21 version -- app/version.sh@18 -- # minor=5 00:06:36.061 10:00:21 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.061 10:00:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # cut -f2 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.061 10:00:21 version -- app/version.sh@19 -- # patch=0 00:06:36.061 10:00:21 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.061 10:00:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # cut -f2 00:06:36.061 10:00:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.061 10:00:21 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.061 10:00:21 version -- app/version.sh@22 -- # version=24.5 00:06:36.061 10:00:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.061 10:00:21 version -- app/version.sh@28 -- # version=24.5rc0 00:06:36.061 10:00:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.061 10:00:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.061 10:00:21 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:36.061 10:00:21 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:36.061 00:06:36.061 real 0m0.170s 00:06:36.061 user 0m0.089s 00:06:36.061 sys 0m0.118s 00:06:36.061 10:00:21 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:36.061 10:00:21 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.061 ************************************ 00:06:36.061 END TEST version 00:06:36.061 ************************************ 00:06:36.061 10:00:21 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:36.061 10:00:21 -- spdk/autotest.sh@194 -- # uname -s 00:06:36.061 10:00:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:36.061 10:00:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.061 10:00:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:36.061 10:00:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
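The version test above recovers the tree's version by scraping include/spdk/version.h with grep/cut/tr and cross-checking it against the in-tree Python package. A minimal standalone sketch of that parsing pattern, reusing the exact commands from the trace (the header path and the 24.5-pre result come from this checkout; running it against another tree is an assumption):

  hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
  # cut -f2 relies on the tab between the macro name and its value in version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix}"   # prints 24.5-pre for the commit under test

version.sh then maps the -pre suffix to the rc0 form before comparing with python3 -c 'import spdk; print(spdk.__version__)', which is why both sides of the final check read 24.5rc0.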
00:06:36.061 10:00:21 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:36.061 10:00:21 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:36.061 10:00:21 -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:36.061 10:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.323 10:00:21 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:36.323 10:00:21 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:36.323 10:00:21 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:36.323 10:00:21 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:36.323 10:00:21 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:36.323 10:00:21 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:36.323 10:00:21 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.323 10:00:21 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:36.323 10:00:21 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:36.323 10:00:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.323 ************************************ 00:06:36.323 START TEST nvmf_tcp 00:06:36.323 ************************************ 00:06:36.323 10:00:21 nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.323 * Looking for test storage... 00:06:36.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.323 10:00:22 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.324 10:00:22 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.324 10:00:22 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.324 10:00:22 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.324 10:00:22 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.324 10:00:22 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.324 10:00:22 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.324 10:00:22 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:36.324 10:00:22 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:36.324 10:00:22 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:36.324 10:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:36.324 10:00:22 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.324 10:00:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:36.324 10:00:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:36.324 
10:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.324 ************************************ 00:06:36.324 START TEST nvmf_example 00:06:36.324 ************************************ 00:06:36.324 10:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:36.587 * Looking for test storage... 00:06:36.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.587 10:00:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:44.794 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:44.794 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.794 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:44.795 Found net devices under 
0000:4b:00.0: cvl_0_0 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:44.795 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:44.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:06:44.795 00:06:44.795 --- 10.0.0.2 ping statistics --- 00:06:44.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.795 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:06:44.795 00:06:44.795 --- 10.0.0.1 ping statistics --- 00:06:44.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.795 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2612321 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2612321 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 2612321 ']' 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
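[editor's note] For readers following the trace, the nvmf_tcp_init sequence above reduces to a handful of iproute2/iptables steps: one physical port (cvl_0_0) is moved into a private network namespace to play the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator side. The sketch below is reconstructed from the commands in the log; the interface names, namespace name, and addresses are the ones this particular run used, not universal defaults.

#!/usr/bin/env bash
# Minimal sketch of the network setup traced above (assumes two linked ports).
set -e

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # target-side port, will live inside $TARGET_NS
INITIATOR_IF=cvl_0_1     # initiator-side port, stays in the root namespace
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, as in the log: each side can reach the other.
ping -c 1 "$TARGET_IP"
ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"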
00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:44.795 10:00:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:44.795 10:00:30 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:45.056 EAL: No free 2048 kB hugepages reported on node 1 
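[editor's note] The rpc_cmd trace above is the core of nvmf_example_test: start the example nvmf target inside the target namespace, build a TCP subsystem over a malloc bdev via RPC, then drive it from the initiator side with spdk_nvme_perf. The following is a hedged reconstruction of those steps; rpc_cmd in the harness is replaced here by a direct call to SPDK's scripts/rpc.py, and $SPDK_DIR stands in for the checked-out tree from the log.

#!/usr/bin/env bash
# Sketch of the target construction and perf run traced above.
set -e

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the log
RPC="$SPDK_DIR/scripts/rpc.py"
TARGET_NS=cvl_0_0_ns_spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Start the example nvmf target inside the target namespace on cores 0-3.
ip netns exec "$TARGET_NS" "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
nvmfpid=$!
sleep 2   # the harness uses waitforlisten on /var/tmp/spdk.sock instead

# Build the target: TCP transport, a 64 MiB / 512 B-block malloc bdev,
# a subsystem, its namespace, and a TCP listener on 10.0.0.2:4420.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512            # -> Malloc0
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Drive I/O from the initiator side: queue depth 64, 4 KiB I/O,
# 30% read / 70% write random mix, for 10 seconds (matches the log).
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"

kill "$nvmfpid"; wait "$nvmfpid" || true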
00:06:55.058 Initializing NVMe Controllers 00:06:55.058 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:55.058 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:55.058 Initialization complete. Launching workers. 00:06:55.058 ======================================================== 00:06:55.058 Latency(us) 00:06:55.058 Device Information : IOPS MiB/s Average min max 00:06:55.058 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13737.42 53.66 4658.53 900.36 15428.31 00:06:55.058 ======================================================== 00:06:55.058 Total : 13737.42 53.66 4658.53 900.36 15428.31 00:06:55.058 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:55.058 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:55.058 rmmod nvme_tcp 00:06:55.058 rmmod nvme_fabrics 00:06:55.058 rmmod nvme_keyring 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2612321 ']' 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2612321 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 2612321 ']' 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 2612321 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2612321 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2612321' 00:06:55.321 killing process with pid 2612321 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 2612321 00:06:55.321 10:00:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 2612321 00:06:55.321 nvmf threads initialize successfully 00:06:55.321 bdev subsystem init successfully 00:06:55.321 created a nvmf target service 00:06:55.321 create targets's poll groups done 00:06:55.321 all subsystems of target started 00:06:55.321 nvmf target is running 00:06:55.321 all subsystems of target stopped 00:06:55.321 destroy targets's poll groups done 00:06:55.321 destroyed the nvmf target service 00:06:55.321 bdev subsystem finish successfully 00:06:55.321 nvmf threads destroy successfully 00:06:55.321 10:00:41 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:55.321 10:00:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:55.321 10:00:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:55.321 10:00:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:55.321 10:00:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:55.321 10:00:41 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.321 10:00:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.321 10:00:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.876 10:00:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:57.876 10:00:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:57.876 10:00:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:57.876 10:00:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 00:06:57.876 real 0m21.073s 00:06:57.876 user 0m46.533s 00:06:57.876 sys 0m6.554s 00:06:57.876 10:00:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:57.876 10:00:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 ************************************ 00:06:57.876 END TEST nvmf_example 00:06:57.876 ************************************ 00:06:57.876 10:00:43 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:57.876 10:00:43 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:06:57.876 10:00:43 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:57.876 10:00:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.876 ************************************ 00:06:57.876 START TEST nvmf_filesystem 00:06:57.876 ************************************ 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:57.876 * Looking for test storage... 
00:06:57.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:57.876 10:00:43 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:57.876 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:57.877 #define SPDK_CONFIG_H 00:06:57.877 #define SPDK_CONFIG_APPS 1 00:06:57.877 #define SPDK_CONFIG_ARCH native 00:06:57.877 #undef SPDK_CONFIG_ASAN 00:06:57.877 #undef SPDK_CONFIG_AVAHI 00:06:57.877 #undef SPDK_CONFIG_CET 00:06:57.877 #define SPDK_CONFIG_COVERAGE 1 00:06:57.877 #define SPDK_CONFIG_CROSS_PREFIX 00:06:57.877 #undef SPDK_CONFIG_CRYPTO 00:06:57.877 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:57.877 #undef SPDK_CONFIG_CUSTOMOCF 00:06:57.877 #undef SPDK_CONFIG_DAOS 00:06:57.877 #define SPDK_CONFIG_DAOS_DIR 00:06:57.877 #define SPDK_CONFIG_DEBUG 1 00:06:57.877 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:57.877 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:57.877 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:06:57.877 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:57.877 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:57.877 #undef SPDK_CONFIG_DPDK_UADK 00:06:57.877 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:57.877 #define SPDK_CONFIG_EXAMPLES 1 00:06:57.877 #undef SPDK_CONFIG_FC 00:06:57.877 #define SPDK_CONFIG_FC_PATH 00:06:57.877 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:57.877 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:57.877 #undef SPDK_CONFIG_FUSE 00:06:57.877 #undef SPDK_CONFIG_FUZZER 00:06:57.877 #define SPDK_CONFIG_FUZZER_LIB 00:06:57.877 #undef SPDK_CONFIG_GOLANG 00:06:57.877 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:57.877 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:57.877 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:57.877 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:57.877 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:57.877 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:57.877 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:57.877 #define SPDK_CONFIG_IDXD 1 00:06:57.877 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:57.877 #undef SPDK_CONFIG_IPSEC_MB 00:06:57.877 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:57.877 #define SPDK_CONFIG_ISAL 1 00:06:57.877 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:57.877 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:57.877 #define 
SPDK_CONFIG_LIBDIR 00:06:57.877 #undef SPDK_CONFIG_LTO 00:06:57.877 #define SPDK_CONFIG_MAX_LCORES 00:06:57.877 #define SPDK_CONFIG_NVME_CUSE 1 00:06:57.877 #undef SPDK_CONFIG_OCF 00:06:57.877 #define SPDK_CONFIG_OCF_PATH 00:06:57.877 #define SPDK_CONFIG_OPENSSL_PATH 00:06:57.877 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:57.877 #define SPDK_CONFIG_PGO_DIR 00:06:57.877 #undef SPDK_CONFIG_PGO_USE 00:06:57.877 #define SPDK_CONFIG_PREFIX /usr/local 00:06:57.877 #undef SPDK_CONFIG_RAID5F 00:06:57.877 #undef SPDK_CONFIG_RBD 00:06:57.877 #define SPDK_CONFIG_RDMA 1 00:06:57.877 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:57.877 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:57.877 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:57.877 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:57.877 #define SPDK_CONFIG_SHARED 1 00:06:57.877 #undef SPDK_CONFIG_SMA 00:06:57.877 #define SPDK_CONFIG_TESTS 1 00:06:57.877 #undef SPDK_CONFIG_TSAN 00:06:57.877 #define SPDK_CONFIG_UBLK 1 00:06:57.877 #define SPDK_CONFIG_UBSAN 1 00:06:57.877 #undef SPDK_CONFIG_UNIT_TESTS 00:06:57.877 #undef SPDK_CONFIG_URING 00:06:57.877 #define SPDK_CONFIG_URING_PATH 00:06:57.877 #undef SPDK_CONFIG_URING_ZNS 00:06:57.877 #undef SPDK_CONFIG_USDT 00:06:57.877 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:57.877 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:57.877 #define SPDK_CONFIG_VFIO_USER 1 00:06:57.877 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:57.877 #define SPDK_CONFIG_VHOST 1 00:06:57.877 #define SPDK_CONFIG_VIRTIO 1 00:06:57.877 #undef SPDK_CONFIG_VTUNE 00:06:57.877 #define SPDK_CONFIG_VTUNE_DIR 00:06:57.877 #define SPDK_CONFIG_WERROR 1 00:06:57.877 #define SPDK_CONFIG_WPDK_DIR 00:06:57.877 #undef SPDK_CONFIG_XNVME 00:06:57.877 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:57.877 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:57.878 
10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:57.878 
10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:57.878 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
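For a local reproduction of the sanitizer setup traced above, the same environment can be recreated with a few exports; the option strings and the libfuse3 leak suppression are copied from the trace, and /var/tmp/asan_suppression_file is simply the scratch path the harness uses here. A minimal sketch, not the harness itself:

    # Sketch: recreate the sanitizer environment recorded in the trace above.
    ASAN_SUPP=/var/tmp/asan_suppression_file
    rm -rf "$ASAN_SUPP"
    echo 'leak:libfuse3.so' > "$ASAN_SUPP"            # suppress known libfuse leak reports
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$ASAN_SUPP
    export PYTHONDONTWRITEBYTECODE=1                  # keep .pyc files out of the workspace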
00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2615124 ]] 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2615124 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.gVpJBp 00:06:57.879 
10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.gVpJBp/tests/target /tmp/spdk.gVpJBp 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:57.879 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=967749632 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4316680192 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=114181292032 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370943488 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=15189651456 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64629833728 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685469696 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864491008 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874190336 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=234496 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=269312 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683057152 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685473792 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=2416640 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937089024 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937093120 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:57.880 * Looking for test storage... 
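The df -T read loop traced above fills the mounts/fss/avails/sizes/uses arrays that the candidate loop just below walks to find a filesystem with enough room (set_test_storage asks for the 2 GiB payload plus overhead, requested_size=2214592512). A standalone sketch of that selection, assuming only GNU df/awk behaviour and using illustrative variable names rather than the harness's arrays:

    # Sketch: pick the first mount point whose free space covers the requested size.
    requested_size=2214592512          # value taken from the trace (2 GiB payload + overhead)
    best_mount=
    while read -r source fs size used avail mount; do
        [ -z "$avail" ] && continue
        if [ "$avail" -ge "$requested_size" ]; then
            best_mount=$mount
            break
        fi
    done < <(df -T -B1 | awk 'NR > 1 {print $1, $2, $3, $4, $5, $7}')
    printf '* Looking for test storage...\n'
    printf '* Found candidate storage at %s\n' "${best_mount:-<none>}"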
00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=114181292032 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=17404243968 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.880 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
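The nvmf/common.sh settings traced above reduce to a small set of initiator-side parameters: TCP port 4420 (with 4421/4422 as spares), a host NQN produced by nvme gen-hostnqn, and a host ID equal to the UUID inside that NQN. A short sketch of deriving that identity for a later nvme connect; only nvme gen-hostnqn, the port numbers, and the NQN/ID relationship come from the trace, the variable handling is illustrative:

    # Sketch: build the initiator identity later passed to 'nvme connect'.
    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}             # the uuid suffix doubles as the host ID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "connecting on port $NVMF_PORT as $NVME_HOSTNQN"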
00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:57.881 10:00:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.024 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.024 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:06.024 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:06.024 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:06.024 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:06.024 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
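gather_supported_nvmf_pci_devs builds its NIC lists from vendor:device IDs: Intel (0x8086) E810 parts 0x1592/0x159b, the X722 0x37d2, and several Mellanox (0x15b3) devices. The script reads these from a PCI bus cache; a rough hand-run equivalent of the same matching with lspci (device IDs copied from the trace, the lspci usage is only an illustration) would be:

    # Sketch: list NICs whose vendor:device IDs match the sets assembled above.
    intel=8086; mellanox=15b3
    e810='1592|159b'
    x722='37d2'
    mlx='a2dc|1021|a2d6|101d|1017|1019|1015|1013'
    lspci -Dnn | grep -Ei "\[($intel:($e810|$x722)|$mellanox:($mlx))\]"

On this node the match corresponds to the two E810 ports at 0000:4b:00.0 and 0000:4b:00.1 reported immediately below.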
00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:06.025 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:06.025 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:06.025 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:06.025 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:06.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:06.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:07:06.025 00:07:06.025 --- 10.0.0.2 ping statistics --- 00:07:06.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.025 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.524 ms 00:07:06.025 00:07:06.025 --- 10.0.0.1 ping statistics --- 00:07:06.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.025 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.025 ************************************ 00:07:06.025 START TEST nvmf_filesystem_no_in_capsule 00:07:06.025 ************************************ 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2618753 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2618753 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 
2618753 ']' 00:07:06.025 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.026 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:06.026 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.026 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:06.026 10:00:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.026 [2024-05-15 10:00:51.015234] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:06.026 [2024-05-15 10:00:51.015288] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.026 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.026 [2024-05-15 10:00:51.086316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.026 [2024-05-15 10:00:51.126612] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.026 [2024-05-15 10:00:51.126655] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.026 [2024-05-15 10:00:51.126665] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.026 [2024-05-15 10:00:51.126672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.026 [2024-05-15 10:00:51.126679] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
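nvmfappstart above launches the target inside the cvl_0_0_ns_spdk namespace and then waits for it to listen on /var/tmp/spdk.sock. A condensed sketch of that launch-and-wait step, assuming the workspace layout shown in the trace, mirroring the max_retries=100 visible above, and using scripts/rpc.py rpc_get_methods as one way to probe socket readiness (the harness's waitforlisten carries more bookkeeping):

    # Sketch: start the NVMe-oF target in the test namespace and wait for its RPC socket.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app has opened the RPC socket
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done
    kill -0 "$nvmfpid"      # still alive => startup succeeded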
00:07:06.026 [2024-05-15 10:00:51.126820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.026 [2024-05-15 10:00:51.126946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.026 [2024-05-15 10:00:51.127104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.026 [2024-05-15 10:00:51.127106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.026 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:06.026 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:07:06.026 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.026 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:06.026 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.293 [2024-05-15 10:00:51.829968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.293 Malloc1 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.293 [2024-05-15 10:00:51.957509] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:06.293 [2024-05-15 10:00:51.957780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:06.293 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:07:06.294 { 00:07:06.294 "name": "Malloc1", 00:07:06.294 "aliases": [ 00:07:06.294 "52b5c9d9-b9e1-4edb-b81c-64fe77d00613" 00:07:06.294 ], 00:07:06.294 "product_name": "Malloc disk", 00:07:06.294 "block_size": 512, 00:07:06.294 "num_blocks": 1048576, 00:07:06.294 "uuid": "52b5c9d9-b9e1-4edb-b81c-64fe77d00613", 00:07:06.294 "assigned_rate_limits": { 00:07:06.294 "rw_ios_per_sec": 0, 00:07:06.294 "rw_mbytes_per_sec": 0, 00:07:06.294 "r_mbytes_per_sec": 0, 00:07:06.294 "w_mbytes_per_sec": 0 00:07:06.294 }, 00:07:06.294 "claimed": true, 00:07:06.294 "claim_type": "exclusive_write", 00:07:06.294 "zoned": false, 00:07:06.294 "supported_io_types": { 00:07:06.294 "read": true, 00:07:06.294 "write": true, 00:07:06.294 "unmap": true, 00:07:06.294 "write_zeroes": true, 00:07:06.294 "flush": true, 00:07:06.294 "reset": true, 00:07:06.294 "compare": false, 00:07:06.294 "compare_and_write": false, 00:07:06.294 "abort": true, 00:07:06.294 "nvme_admin": false, 00:07:06.294 "nvme_io": false 00:07:06.294 }, 00:07:06.294 "memory_domains": [ 00:07:06.294 { 00:07:06.294 "dma_device_id": "system", 00:07:06.294 "dma_device_type": 1 
00:07:06.294 }, 00:07:06.294 { 00:07:06.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.294 "dma_device_type": 2 00:07:06.294 } 00:07:06.294 ], 00:07:06.294 "driver_specific": {} 00:07:06.294 } 00:07:06.294 ]' 00:07:06.294 10:00:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:07:06.294 10:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:07:06.294 10:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:07:06.294 10:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:07:06.294 10:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:07:06.294 10:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:07:06.294 10:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:06.294 10:00:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:08.226 10:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:08.226 10:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:07:08.226 10:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:08.226 10:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:08.226 10:00:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:10.149 10:00:55 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:10.149 10:00:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:10.411 10:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:11.025 10:00:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:11.968 ************************************ 00:07:11.968 START TEST filesystem_ext4 00:07:11.968 ************************************ 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:11.968 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:11.968 10:00:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:11.968 mke2fs 1.46.5 (30-Dec-2021) 00:07:11.968 Discarding device blocks: 0/522240 done 00:07:11.968 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:11.968 Filesystem UUID: 689e3ae0-9a7b-490b-8698-3a9edf9f7843 00:07:11.968 Superblock backups stored on blocks: 00:07:11.968 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:11.968 00:07:11.968 Allocating group tables: 0/64 done 00:07:11.968 Writing inode tables: 0/64 done 00:07:12.229 Creating journal (8192 blocks): done 00:07:12.229 Writing superblocks and filesystem accounting information: 0/64 done 00:07:12.229 00:07:12.229 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:12.229 10:00:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2618753 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.174 00:07:13.174 real 0m1.228s 00:07:13.174 user 0m0.026s 00:07:13.174 sys 0m0.070s 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:13.174 ************************************ 00:07:13.174 END TEST filesystem_ext4 00:07:13.174 ************************************ 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:13.174 10:00:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:13.174 10:00:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:13.435 ************************************ 00:07:13.435 START TEST filesystem_btrfs 00:07:13.435 ************************************ 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:13.435 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:14.007 btrfs-progs v6.6.2 00:07:14.007 See https://btrfs.readthedocs.io for more information. 00:07:14.007 00:07:14.007 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:14.007 NOTE: several default settings have changed in version 5.15, please make sure 00:07:14.007 this does not affect your deployments: 00:07:14.007 - DUP for metadata (-m dup) 00:07:14.007 - enabled no-holes (-O no-holes) 00:07:14.007 - enabled free-space-tree (-R free-space-tree) 00:07:14.007 00:07:14.007 Label: (null) 00:07:14.007 UUID: ca69145e-50c7-485a-98c3-0b15997533e3 00:07:14.007 Node size: 16384 00:07:14.007 Sector size: 4096 00:07:14.007 Filesystem size: 510.00MiB 00:07:14.007 Block group profiles: 00:07:14.007 Data: single 8.00MiB 00:07:14.007 Metadata: DUP 32.00MiB 00:07:14.007 System: DUP 8.00MiB 00:07:14.007 SSD detected: yes 00:07:14.007 Zoned device: no 00:07:14.007 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:14.007 Runtime features: free-space-tree 00:07:14.007 Checksum: crc32c 00:07:14.007 Number of devices: 1 00:07:14.007 Devices: 00:07:14.007 ID SIZE PATH 00:07:14.007 1 510.00MiB /dev/nvme0n1p1 00:07:14.007 00:07:14.007 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:14.007 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:14.007 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:14.007 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2618753 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:14.269 00:07:14.269 real 0m0.865s 00:07:14.269 user 0m0.023s 00:07:14.269 sys 0m0.137s 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:14.269 ************************************ 00:07:14.269 END TEST filesystem_btrfs 00:07:14.269 ************************************ 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:14.269 10:00:59 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.269 ************************************ 00:07:14.269 START TEST filesystem_xfs 00:07:14.269 ************************************ 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:14.269 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:14.270 10:00:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:14.270 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:14.270 = sectsz=512 attr=2, projid32bit=1 00:07:14.270 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:14.270 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:14.270 data = bsize=4096 blocks=130560, imaxpct=25 00:07:14.270 = sunit=0 swidth=0 blks 00:07:14.270 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:14.270 log =internal log bsize=4096 blocks=16384, version=2 00:07:14.270 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:14.270 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:15.214 Discarding blocks...Done. 
00:07:15.214 10:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:15.214 10:01:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2618753 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:17.148 00:07:17.148 real 0m2.766s 00:07:17.148 user 0m0.028s 00:07:17.148 sys 0m0.076s 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:17.148 ************************************ 00:07:17.148 END TEST filesystem_xfs 00:07:17.148 ************************************ 00:07:17.148 10:01:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:17.410 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:17.410 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:17.671 
10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.671 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2618753 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2618753 ']' 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2618753 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2618753 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2618753' 00:07:17.672 killing process with pid 2618753 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 2618753 00:07:17.672 [2024-05-15 10:01:03.324711] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:17.672 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 2618753 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:17.933 00:07:17.933 real 0m12.595s 00:07:17.933 user 0m49.698s 00:07:17.933 sys 0m1.264s 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.933 ************************************ 00:07:17.933 END TEST nvmf_filesystem_no_in_capsule 00:07:17.933 ************************************ 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # 
'[' 3 -le 1 ']' 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:17.933 ************************************ 00:07:17.933 START TEST nvmf_filesystem_in_capsule 00:07:17.933 ************************************ 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2621625 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2621625 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 2621625 ']' 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:17.933 10:01:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:17.933 [2024-05-15 10:01:03.685797] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:17.933 [2024-05-15 10:01:03.685843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.195 [2024-05-15 10:01:03.749344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.195 [2024-05-15 10:01:03.780839] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.195 [2024-05-15 10:01:03.780875] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:18.195 [2024-05-15 10:01:03.780883] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.195 [2024-05-15 10:01:03.780889] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:18.195 [2024-05-15 10:01:03.780895] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.195 [2024-05-15 10:01:03.781032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.195 [2024-05-15 10:01:03.781155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.195 [2024-05-15 10:01:03.781387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.195 [2024-05-15 10:01:03.781547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:18.767 [2024-05-15 10:01:04.510028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:18.767 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.028 Malloc1 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.028 10:01:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.028 [2024-05-15 10:01:04.630312] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:19.028 [2024-05-15 10:01:04.630572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:07:19.028 { 00:07:19.028 "name": "Malloc1", 00:07:19.028 "aliases": [ 00:07:19.028 "f1e9734d-01c0-4f46-9531-7c89e22e5fcc" 00:07:19.028 ], 00:07:19.028 "product_name": "Malloc disk", 00:07:19.028 "block_size": 512, 00:07:19.028 "num_blocks": 1048576, 00:07:19.028 "uuid": "f1e9734d-01c0-4f46-9531-7c89e22e5fcc", 00:07:19.028 "assigned_rate_limits": { 00:07:19.028 "rw_ios_per_sec": 0, 00:07:19.028 "rw_mbytes_per_sec": 0, 00:07:19.028 "r_mbytes_per_sec": 0, 00:07:19.028 "w_mbytes_per_sec": 0 00:07:19.028 }, 00:07:19.028 "claimed": true, 00:07:19.028 "claim_type": "exclusive_write", 00:07:19.028 "zoned": false, 00:07:19.028 "supported_io_types": { 00:07:19.028 "read": true, 00:07:19.028 "write": true, 00:07:19.028 "unmap": true, 00:07:19.028 "write_zeroes": true, 00:07:19.028 "flush": true, 00:07:19.028 "reset": true, 
00:07:19.028 "compare": false, 00:07:19.028 "compare_and_write": false, 00:07:19.028 "abort": true, 00:07:19.028 "nvme_admin": false, 00:07:19.028 "nvme_io": false 00:07:19.028 }, 00:07:19.028 "memory_domains": [ 00:07:19.028 { 00:07:19.028 "dma_device_id": "system", 00:07:19.028 "dma_device_type": 1 00:07:19.028 }, 00:07:19.028 { 00:07:19.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.028 "dma_device_type": 2 00:07:19.028 } 00:07:19.028 ], 00:07:19.028 "driver_specific": {} 00:07:19.028 } 00:07:19.028 ]' 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:19.028 10:01:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.941 10:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.941 10:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:07:20.941 10:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.941 10:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:07:20.941 10:01:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:22.863 10:01:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:24.249 ************************************ 00:07:24.249 START TEST filesystem_in_capsule_ext4 00:07:24.249 ************************************ 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:24.249 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:07:24.250 10:01:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:24.250 mke2fs 1.46.5 (30-Dec-2021) 00:07:24.250 Discarding device blocks: 0/522240 done 00:07:24.250 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:24.250 Filesystem UUID: d03c8990-ba73-454b-8a5d-4b115d4266f5 00:07:24.250 Superblock backups stored on blocks: 00:07:24.250 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:24.250 00:07:24.250 Allocating group tables: 0/64 done 00:07:24.250 Writing inode tables: 0/64 done 00:07:24.250 Creating journal (8192 blocks): done 00:07:25.191 Writing superblocks and filesystem accounting information: 0/64 done 00:07:25.191 00:07:25.191 10:01:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:07:25.191 10:01:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:25.450 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:25.450 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:25.450 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:25.450 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:25.450 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:25.450 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:25.710 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2621625 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:25.711 00:07:25.711 real 0m1.622s 00:07:25.711 user 0m0.026s 00:07:25.711 sys 0m0.070s 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:25.711 ************************************ 00:07:25.711 END TEST filesystem_in_capsule_ext4 00:07:25.711 ************************************ 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.711 ************************************ 00:07:25.711 START TEST filesystem_in_capsule_btrfs 00:07:25.711 ************************************ 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:25.711 btrfs-progs v6.6.2 00:07:25.711 See https://btrfs.readthedocs.io for more information. 00:07:25.711 00:07:25.711 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:25.711 NOTE: several default settings have changed in version 5.15, please make sure 00:07:25.711 this does not affect your deployments: 00:07:25.711 - DUP for metadata (-m dup) 00:07:25.711 - enabled no-holes (-O no-holes) 00:07:25.711 - enabled free-space-tree (-R free-space-tree) 00:07:25.711 00:07:25.711 Label: (null) 00:07:25.711 UUID: 67064874-0563-481e-97da-697cfa02b162 00:07:25.711 Node size: 16384 00:07:25.711 Sector size: 4096 00:07:25.711 Filesystem size: 510.00MiB 00:07:25.711 Block group profiles: 00:07:25.711 Data: single 8.00MiB 00:07:25.711 Metadata: DUP 32.00MiB 00:07:25.711 System: DUP 8.00MiB 00:07:25.711 SSD detected: yes 00:07:25.711 Zoned device: no 00:07:25.711 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:25.711 Runtime features: free-space-tree 00:07:25.711 Checksum: crc32c 00:07:25.711 Number of devices: 1 00:07:25.711 Devices: 00:07:25.711 ID SIZE PATH 00:07:25.711 1 510.00MiB /dev/nvme0n1p1 00:07:25.711 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:07:25.711 10:01:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.283 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.283 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2621625 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.545 00:07:26.545 real 0m0.799s 00:07:26.545 user 0m0.024s 00:07:26.545 sys 0m0.136s 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:26.545 ************************************ 00:07:26.545 END TEST filesystem_in_capsule_btrfs 00:07:26.545 ************************************ 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.545 ************************************ 00:07:26.545 START TEST filesystem_in_capsule_xfs 00:07:26.545 ************************************ 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:07:26.545 10:01:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:26.545 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:26.545 = sectsz=512 attr=2, projid32bit=1 00:07:26.545 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:26.545 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:26.545 data = bsize=4096 blocks=130560, imaxpct=25 00:07:26.545 = sunit=0 swidth=0 blks 00:07:26.545 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:26.545 log =internal log bsize=4096 blocks=16384, version=2 00:07:26.545 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:26.545 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:27.969 Discarding blocks...Done. 
00:07:27.969 10:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:07:27.969 10:01:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2621625 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.557 00:07:30.557 real 0m3.716s 00:07:30.557 user 0m0.021s 00:07:30.557 sys 0m0.082s 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:30.557 ************************************ 00:07:30.557 END TEST filesystem_in_capsule_xfs 00:07:30.557 ************************************ 00:07:30.557 10:01:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:30.558 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:30.558 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:30.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.819 10:01:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2621625 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 2621625 ']' 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 2621625 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2621625 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2621625' 00:07:30.819 killing process with pid 2621625 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 2621625 00:07:30.819 [2024-05-15 10:01:16.518584] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:30.819 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 2621625 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:31.080 00:07:31.080 real 0m13.108s 00:07:31.080 user 0m51.827s 00:07:31.080 sys 0m1.239s 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.080 ************************************ 00:07:31.080 END TEST nvmf_filesystem_in_capsule 00:07:31.080 ************************************ 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.080 rmmod nvme_tcp 00:07:31.080 rmmod nvme_fabrics 00:07:31.080 rmmod nvme_keyring 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.080 10:01:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.630 10:01:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.630 00:07:33.630 real 0m35.690s 00:07:33.630 user 1m43.740s 00:07:33.630 sys 0m8.184s 00:07:33.630 10:01:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:33.630 10:01:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:33.630 ************************************ 00:07:33.630 END TEST nvmf_filesystem 00:07:33.630 ************************************ 00:07:33.630 10:01:18 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.630 10:01:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:33.630 10:01:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:33.630 10:01:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:33.630 ************************************ 00:07:33.630 START TEST nvmf_target_discovery 00:07:33.630 ************************************ 00:07:33.630 10:01:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.630 * Looking for test storage... 
00:07:33.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.630 10:01:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.630 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:33.630 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.630 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.630 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.631 10:01:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.235 10:01:26 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:40.235 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:40.235 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:40.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:40.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.235 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:40.497 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:40.497 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.497 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.497 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.497 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.497 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:40.497 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:40.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.811 ms 00:07:40.759 00:07:40.759 --- 10.0.0.2 ping statistics --- 00:07:40.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.759 rtt min/avg/max/mdev = 0.811/0.811/0.811/0.000 ms 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:40.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:07:40.759 00:07:40.759 --- 10.0.0.1 ping statistics --- 00:07:40.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.759 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2628559 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2628559 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 2628559 ']' 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
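At this point nvmftestinit has built the split-namespace topology used for phy TCP runs: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, connectivity is verified both ways with ping, nvme-tcp is loaded, and nvmf_tgt is then launched inside the namespace. A minimal sketch of the same setup with the names and addresses from this trace (run as root; the job itself uses the absolute workspace path for nvmf_tgt):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &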
00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:40.759 10:01:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:40.759 [2024-05-15 10:01:26.472404] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:40.759 [2024-05-15 10:01:26.472469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.759 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.759 [2024-05-15 10:01:26.543675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.026 [2024-05-15 10:01:26.583895] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.026 [2024-05-15 10:01:26.583940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.026 [2024-05-15 10:01:26.583947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.026 [2024-05-15 10:01:26.583954] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.026 [2024-05-15 10:01:26.583960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.026 [2024-05-15 10:01:26.584105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.026 [2024-05-15 10:01:26.584221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.026 [2024-05-15 10:01:26.584384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.026 [2024-05-15 10:01:26.584383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 [2024-05-15 10:01:27.289945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null1 102400 512 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 Null1 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 [2024-05-15 10:01:27.350072] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:41.602 [2024-05-15 10:01:27.350284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 Null2 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.602 10:01:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.602 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 Null3 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 Null4 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:41.865 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:07:41.865 00:07:41.865 Discovery Log Number of Records 6, Generation counter 6 00:07:41.865 =====Discovery Log Entry 0====== 00:07:41.865 trtype: tcp 00:07:41.865 adrfam: ipv4 00:07:41.865 subtype: current discovery subsystem 00:07:41.865 treq: not required 00:07:41.865 portid: 0 00:07:41.865 trsvcid: 4420 00:07:41.865 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:41.865 traddr: 10.0.0.2 00:07:41.865 eflags: explicit discovery connections, duplicate discovery information 00:07:41.865 sectype: none 00:07:41.865 =====Discovery Log Entry 1====== 00:07:41.865 trtype: tcp 00:07:41.865 adrfam: ipv4 00:07:41.865 subtype: nvme subsystem 00:07:41.865 treq: not required 00:07:41.865 portid: 0 00:07:41.865 trsvcid: 4420 00:07:41.865 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:41.865 traddr: 10.0.0.2 00:07:41.865 eflags: none 00:07:41.865 sectype: none 00:07:41.865 =====Discovery Log Entry 2====== 00:07:41.865 trtype: tcp 00:07:41.865 adrfam: ipv4 00:07:41.865 subtype: nvme subsystem 00:07:41.865 treq: not required 00:07:41.865 portid: 0 00:07:41.865 trsvcid: 4420 00:07:41.865 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:41.865 traddr: 10.0.0.2 00:07:41.865 eflags: none 00:07:41.865 sectype: none 00:07:41.866 =====Discovery Log Entry 3====== 00:07:41.866 trtype: tcp 00:07:41.866 adrfam: ipv4 00:07:41.866 subtype: nvme subsystem 00:07:41.866 treq: not required 00:07:41.866 portid: 0 00:07:41.866 trsvcid: 4420 00:07:41.866 subnqn: 
nqn.2016-06.io.spdk:cnode3 00:07:41.866 traddr: 10.0.0.2 00:07:41.866 eflags: none 00:07:41.866 sectype: none 00:07:41.866 =====Discovery Log Entry 4====== 00:07:41.866 trtype: tcp 00:07:41.866 adrfam: ipv4 00:07:41.866 subtype: nvme subsystem 00:07:41.866 treq: not required 00:07:41.866 portid: 0 00:07:41.866 trsvcid: 4420 00:07:41.866 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:41.866 traddr: 10.0.0.2 00:07:41.866 eflags: none 00:07:41.866 sectype: none 00:07:41.866 =====Discovery Log Entry 5====== 00:07:41.866 trtype: tcp 00:07:41.866 adrfam: ipv4 00:07:41.866 subtype: discovery subsystem referral 00:07:41.866 treq: not required 00:07:41.866 portid: 0 00:07:41.866 trsvcid: 4430 00:07:41.866 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:41.866 traddr: 10.0.0.2 00:07:41.866 eflags: none 00:07:41.866 sectype: none 00:07:41.866 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:41.866 Perform nvmf subsystem discovery via RPC 00:07:42.128 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:42.128 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.128 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.128 [ 00:07:42.128 { 00:07:42.128 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:42.128 "subtype": "Discovery", 00:07:42.128 "listen_addresses": [ 00:07:42.128 { 00:07:42.128 "trtype": "TCP", 00:07:42.128 "adrfam": "IPv4", 00:07:42.128 "traddr": "10.0.0.2", 00:07:42.128 "trsvcid": "4420" 00:07:42.128 } 00:07:42.128 ], 00:07:42.128 "allow_any_host": true, 00:07:42.128 "hosts": [] 00:07:42.128 }, 00:07:42.128 { 00:07:42.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:42.128 "subtype": "NVMe", 00:07:42.128 "listen_addresses": [ 00:07:42.128 { 00:07:42.128 "trtype": "TCP", 00:07:42.128 "adrfam": "IPv4", 00:07:42.128 "traddr": "10.0.0.2", 00:07:42.128 "trsvcid": "4420" 00:07:42.128 } 00:07:42.128 ], 00:07:42.128 "allow_any_host": true, 00:07:42.128 "hosts": [], 00:07:42.128 "serial_number": "SPDK00000000000001", 00:07:42.128 "model_number": "SPDK bdev Controller", 00:07:42.128 "max_namespaces": 32, 00:07:42.128 "min_cntlid": 1, 00:07:42.128 "max_cntlid": 65519, 00:07:42.128 "namespaces": [ 00:07:42.128 { 00:07:42.128 "nsid": 1, 00:07:42.128 "bdev_name": "Null1", 00:07:42.128 "name": "Null1", 00:07:42.128 "nguid": "94B954A6892B4FD6B0C0E8DE0398CBC4", 00:07:42.128 "uuid": "94b954a6-892b-4fd6-b0c0-e8de0398cbc4" 00:07:42.128 } 00:07:42.128 ] 00:07:42.128 }, 00:07:42.128 { 00:07:42.128 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:42.128 "subtype": "NVMe", 00:07:42.128 "listen_addresses": [ 00:07:42.128 { 00:07:42.128 "trtype": "TCP", 00:07:42.128 "adrfam": "IPv4", 00:07:42.128 "traddr": "10.0.0.2", 00:07:42.128 "trsvcid": "4420" 00:07:42.128 } 00:07:42.128 ], 00:07:42.128 "allow_any_host": true, 00:07:42.128 "hosts": [], 00:07:42.128 "serial_number": "SPDK00000000000002", 00:07:42.128 "model_number": "SPDK bdev Controller", 00:07:42.128 "max_namespaces": 32, 00:07:42.128 "min_cntlid": 1, 00:07:42.128 "max_cntlid": 65519, 00:07:42.128 "namespaces": [ 00:07:42.128 { 00:07:42.128 "nsid": 1, 00:07:42.128 "bdev_name": "Null2", 00:07:42.128 "name": "Null2", 00:07:42.128 "nguid": "8B676E5FCD624257855F2D1726EB194E", 00:07:42.128 "uuid": "8b676e5f-cd62-4257-855f-2d1726eb194e" 00:07:42.128 } 00:07:42.128 ] 00:07:42.128 }, 00:07:42.128 { 00:07:42.128 "nqn": "nqn.2016-06.io.spdk:cnode3", 
00:07:42.128 "subtype": "NVMe", 00:07:42.128 "listen_addresses": [ 00:07:42.128 { 00:07:42.128 "trtype": "TCP", 00:07:42.128 "adrfam": "IPv4", 00:07:42.128 "traddr": "10.0.0.2", 00:07:42.128 "trsvcid": "4420" 00:07:42.128 } 00:07:42.128 ], 00:07:42.128 "allow_any_host": true, 00:07:42.128 "hosts": [], 00:07:42.128 "serial_number": "SPDK00000000000003", 00:07:42.128 "model_number": "SPDK bdev Controller", 00:07:42.128 "max_namespaces": 32, 00:07:42.128 "min_cntlid": 1, 00:07:42.128 "max_cntlid": 65519, 00:07:42.129 "namespaces": [ 00:07:42.129 { 00:07:42.129 "nsid": 1, 00:07:42.129 "bdev_name": "Null3", 00:07:42.129 "name": "Null3", 00:07:42.129 "nguid": "28ADE56C8F8049A8A1A30C882FCE384A", 00:07:42.129 "uuid": "28ade56c-8f80-49a8-a1a3-0c882fce384a" 00:07:42.129 } 00:07:42.129 ] 00:07:42.129 }, 00:07:42.129 { 00:07:42.129 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:42.129 "subtype": "NVMe", 00:07:42.129 "listen_addresses": [ 00:07:42.129 { 00:07:42.129 "trtype": "TCP", 00:07:42.129 "adrfam": "IPv4", 00:07:42.129 "traddr": "10.0.0.2", 00:07:42.129 "trsvcid": "4420" 00:07:42.129 } 00:07:42.129 ], 00:07:42.129 "allow_any_host": true, 00:07:42.129 "hosts": [], 00:07:42.129 "serial_number": "SPDK00000000000004", 00:07:42.129 "model_number": "SPDK bdev Controller", 00:07:42.129 "max_namespaces": 32, 00:07:42.129 "min_cntlid": 1, 00:07:42.129 "max_cntlid": 65519, 00:07:42.129 "namespaces": [ 00:07:42.129 { 00:07:42.129 "nsid": 1, 00:07:42.129 "bdev_name": "Null4", 00:07:42.129 "name": "Null4", 00:07:42.129 "nguid": "B4E4C8512B054721BAA7184A53CE0D27", 00:07:42.129 "uuid": "b4e4c851-2b05-4721-baa7-184a53ce0d27" 00:07:42.129 } 00:07:42.129 ] 00:07:42.129 } 00:07:42.129 ] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:42.129 10:01:27 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:42.129 10:01:27 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.129 rmmod nvme_tcp 00:07:42.129 rmmod nvme_fabrics 00:07:42.129 rmmod nvme_keyring 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2628559 ']' 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2628559 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 2628559 ']' 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 2628559 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:42.129 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2628559 00:07:42.392 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:42.392 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:42.392 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2628559' 00:07:42.392 killing process with pid 2628559 00:07:42.392 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 2628559 00:07:42.392 [2024-05-15 10:01:27.953930] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:42.392 10:01:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 2628559 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.392 10:01:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.947 10:01:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:07:44.947 00:07:44.947 real 0m11.156s 00:07:44.947 user 0m8.149s 00:07:44.947 sys 0m5.804s 00:07:44.947 10:01:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:44.947 10:01:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:44.947 ************************************ 00:07:44.947 END TEST nvmf_target_discovery 00:07:44.947 ************************************ 00:07:44.947 10:01:30 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:44.947 10:01:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:44.947 10:01:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:44.947 10:01:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:44.947 ************************************ 00:07:44.947 START TEST nvmf_referrals 00:07:44.947 ************************************ 00:07:44.947 10:01:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:44.947 * Looking for test storage... 00:07:44.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.947 10:01:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.947 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:44.947 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.948 10:01:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:51.548 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:51.548 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.548 10:01:37 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:51.548 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:51.548 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.548 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
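[annotation] The nvmf_tcp_init sequence running at this point wires the two E810 ports (cvl_0_0, cvl_0_1) into a point-to-point test network: the target port is moved into a private network namespace, both ends get a /24 address, and a ping check follows immediately below in the trace. Collected into one standalone sketch, with the interface, namespace and address values taken from this run (they are rig-specific, not fixed SPDK names):

# Hedged sketch of the target-side network setup performed by nvmf_tcp_init() in this log.
ip netns add cvl_0_0_ns_spdk                      # private namespace for the NVMe-oF target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # connectivity check before the tests start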
00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:07:51.811 00:07:51.811 --- 10.0.0.2 ping statistics --- 00:07:51.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.811 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:07:51.811 00:07:51.811 --- 10.0.0.1 ping statistics --- 00:07:51.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.811 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.811 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2632932 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2632932 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 2632932 ']' 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:52.072 10:01:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.072 [2024-05-15 10:01:37.697289] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:07:52.072 [2024-05-15 10:01:37.697351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.072 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.072 [2024-05-15 10:01:37.762594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.072 [2024-05-15 10:01:37.793634] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.072 [2024-05-15 10:01:37.793673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.072 [2024-05-15 10:01:37.793685] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.072 [2024-05-15 10:01:37.793692] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.072 [2024-05-15 10:01:37.793697] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.072 [2024-05-15 10:01:37.793839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.072 [2024-05-15 10:01:37.793954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.072 [2024-05-15 10:01:37.794109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.072 [2024-05-15 10:01:37.794110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.687 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:52.687 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:07:52.687 10:01:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.687 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:07:52.687 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.960 10:01:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.960 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.960 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 [2024-05-15 10:01:38.514985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.961 10:01:38 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 [2024-05-15 10:01:38.530977] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:52.961 [2024-05-15 10:01:38.531205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 
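[annotation] The referral half of the test drives the target purely over RPC; rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, so the sequence just traced (TCP transport, discovery listener on 8009, three referrals on port 4430, then a count check) can be replayed by hand roughly as below. This is an illustrative sketch run from an SPDK checkout, not the test script itself:

# Illustrative replay of the RPC sequence traced above.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# The test then asserts that exactly three referrals are registered and lists their addresses:
$rpc nvmf_discovery_get_referrals | jq length                         # expected: 3
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort # 127.0.0.2 127.0.0.3 127.0.0.4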
00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.961 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
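[annotation] On the initiator side the same referrals are read back through the kernel nvme CLI: a JSON discovery against the 8009 listener, with jq dropping the target's own "current discovery subsystem" record so only referral entries remain. A trimmed version of the command chain used by get_referral_ips() here (the hostnqn/hostid values are the host-specific UUIDs from this run):

# Read the referral list back from the initiator, as the trace does in get_referral_ips nvme.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort
# With the three referrals registered this prints 127.0.0.2/3/4; after they are removed,
# as in the trace at this point, it prints nothing.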
00:07:53.256 10:01:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.528 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:53.529 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:53.529 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:53.529 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.529 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:53.529 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:53.790 10:01:39 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.790 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] 
| select(.subtype != "current discovery subsystem").traddr' 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.052 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:54.313 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:54.313 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:54.313 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:54.313 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:54.313 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.313 10:01:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:54.313 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:54.313 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:54.314 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.575 rmmod nvme_tcp 00:07:54.575 rmmod nvme_fabrics 00:07:54.575 rmmod nvme_keyring 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2632932 ']' 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2632932 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 2632932 ']' 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 2632932 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2632932 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2632932' 00:07:54.575 killing process with pid 2632932 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 2632932 00:07:54.575 [2024-05-15 10:01:40.288693] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:54.575 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 2632932 00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
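[annotation] The pass that finished just before this teardown registered 127.0.0.2 twice: once against the well-known discovery NQN and once against nqn.2016-06.io.spdk:cnode1. The discovery log distinguishes the two by subtype, which is what the jq filters in the trace verify. A hedged sketch of that check, run from an SPDK checkout, with the host identity flags spelled out for self-containment:

# Sketch of the referral-subtype check exercised above: a referral carrying a subsystem NQN
# surfaces as an "nvme subsystem" record, one added with -n discovery surfaces as a
# "discovery subsystem referral".
NVME_HOST=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
           --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be)
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json "${NVME_HOST[@]}" \
  | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
# expected: nqn.2016-06.io.spdk:cnode1
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json "${NVME_HOST[@]}" \
  | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'
# expected: nqn.2014-08.org.nvmexpress.discovery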
00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.836 10:01:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.752 10:01:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.752 00:07:56.752 real 0m12.255s 00:07:56.752 user 0m13.399s 00:07:56.752 sys 0m6.079s 00:07:56.752 10:01:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:56.752 10:01:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:56.752 ************************************ 00:07:56.752 END TEST nvmf_referrals 00:07:56.752 ************************************ 00:07:56.752 10:01:42 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:56.752 10:01:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:56.752 10:01:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:56.752 10:01:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.014 ************************************ 00:07:57.014 START TEST nvmf_connect_disconnect 00:07:57.014 ************************************ 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:57.014 * Looking for test storage... 00:07:57.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:57.014 10:01:42 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.014 10:01:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.611 10:01:49 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:03.611 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:03.611 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
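[annotation] Both test binaries run the same gather_supported_nvmf_pci_devs pass seen here: NICs are bucketed by PCI vendor:device ID (Intel E810 at 0x1592/0x159b, X722 at 0x37d2, plus several Mellanox IDs), and the matching netdev names are then read out of sysfs. A reduced sketch of that lookup, assuming a pci_bus_cache associative map from "vendor:device" to PCI addresses like the one the sourced nvmf/common.sh indexes:

# Reduced sketch of the device bucketing visible above; pci_bus_cache is an assumption here,
# standing in for the cache the harness builds before this point.
intel=0x8086                                       # Mellanox IDs elided; this rig uses E810
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
x722=(${pci_bus_cache["$intel:0x37d2"]})
pci_devs=("${e810[@]}")                            # tcp runs on this rig pick the E810 ports
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path, keep the netdev name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done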
00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:03.611 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:03.611 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- 
# NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.611 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:03.612 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:08:03.874 00:08:03.874 --- 10.0.0.2 ping statistics --- 00:08:03.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.874 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:03.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:08:03.874 00:08:03.874 --- 10.0.0.1 ping statistics --- 00:08:03.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.874 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2637690 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2637690 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 2637690 ']' 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.874 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.874 [2024-05-15 10:01:49.575752] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
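[annotation] The connect/disconnect target is started the same way as the referrals one: nvmf_tgt is launched inside the target namespace with a four-core mask and full tracepoint mask, and the harness blocks until the RPC socket answers. A compact approximation of what nvmfappstart/waitforlisten do here; the polling loop is a simplification, not the harness's exact implementation, and paths are the ones from this workspace:

# Simplified form of the nvmfappstart sequence traced above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                         # pid of the launch wrapper in this sketch
# Wait until the target answers on its default RPC socket before issuing configuration RPCs.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done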
00:08:03.874 [2024-05-15 10:01:49.575802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.874 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.874 [2024-05-15 10:01:49.635169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.874 [2024-05-15 10:01:49.668850] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.874 [2024-05-15 10:01:49.668888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.874 [2024-05-15 10:01:49.668896] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.874 [2024-05-15 10:01:49.668903] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.874 [2024-05-15 10:01:49.668908] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.874 [2024-05-15 10:01:49.669051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.874 [2024-05-15 10:01:49.669171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.137 [2024-05-15 10:01:49.669338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.137 [2024-05-15 10:01:49.669338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.137 [2024-05-15 10:01:49.803100] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.137 10:01:49 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:04.137 [2024-05-15 10:01:49.862216] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:04.137 [2024-05-15 10:01:49.862451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:04.137 10:01:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:06.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.897 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.516 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:47.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.467 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:56.963 rmmod nvme_tcp 00:11:56.963 rmmod nvme_fabrics 00:11:56.963 rmmod nvme_keyring 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 
2637690 ']' 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2637690 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 2637690 ']' 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 2637690 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2637690 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2637690' 00:11:56.963 killing process with pid 2637690 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 2637690 00:11:56.963 [2024-05-15 10:05:42.322377] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 2637690 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.963 10:05:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.899 10:05:44 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.899 00:11:58.899 real 4m1.956s 00:11:58.899 user 15m25.255s 00:11:58.899 sys 0m21.219s 00:11:58.899 10:05:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:58.899 10:05:44 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 ************************************ 00:11:58.899 END TEST nvmf_connect_disconnect 00:11:58.899 ************************************ 00:11:58.899 10:05:44 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:58.899 10:05:44 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:58.899 10:05:44 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:58.899 10:05:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:58.899 ************************************ 00:11:58.899 START TEST nvmf_multitarget 00:11:58.899 ************************************ 00:11:58.899 
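For readers skimming the log: the nvmf_connect_disconnect run that finishes above boils down to the sequence sketched here. This is a rough bash reconstruction from the rpc_cmd traces, not the literal contents of connect_disconnect.sh; the RPC method names, flags, addresses, bdev name and the 100-iteration count are taken from the log, while the rpc.py path, the loop body and the explicit disconnect call are assumptions for illustration.

    # Target side: TCP transport + one subsystem backed by a 64 MB malloc bdev (flags as traced above)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed conventional path; rpc_cmd uses /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                     # returns the bdev name, "Malloc0" in this run
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: 100 connect/disconnect cycles (num_iterations=100, NVME_CONNECT='nvme connect -i 8')
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # produces the "disconnected 1 controller(s)" lines above
    done

Each cycle accounts for one "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line in the log, and the whole run takes roughly four minutes of wall time (real 4m1.956s).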
10:05:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:59.162 * Looking for test storage... 00:11:59.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:59.162 10:05:44 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
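The nvmftestinit phase that follows (PCI discovery through the ping checks) rebuilds the same two-port test topology used in the previous test: the second E810 port (cvl_0_1, 0000:4b:00.1) stays in the root namespace as the initiator, while the first port (cvl_0_0, 0000:4b:00.0) is moved into a network namespace where nvmf_tgt will run. A condensed sketch of that plumbing, mirroring the nvmf_tcp_init trace below; interface names and addresses come from the log, and the address-flush and cleanup steps are omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # verbatim from the trace
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so its 10.0.0.2:4420 listener is reachable from the root-namespace initiator.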
00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:59.163 10:05:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.826 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:05.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:05.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:05.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:05.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:05.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.748 ms 00:12:05.827 00:12:05.827 --- 10.0.0.2 ping statistics --- 00:12:05.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.827 rtt min/avg/max/mdev = 0.748/0.748/0.748/0.000 ms 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.488 ms 00:12:05.827 00:12:05.827 --- 10.0.0.1 ping statistics --- 00:12:05.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.827 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2689262 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2689262 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 2689262 ']' 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:05.827 10:05:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:05.827 [2024-05-15 10:05:51.579806] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:12:05.827 [2024-05-15 10:05:51.579875] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.827 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.089 [2024-05-15 10:05:51.650756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.089 [2024-05-15 10:05:51.691569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.089 [2024-05-15 10:05:51.691617] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.089 [2024-05-15 10:05:51.691624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.089 [2024-05-15 10:05:51.691631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.089 [2024-05-15 10:05:51.691637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.089 [2024-05-15 10:05:51.691778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.089 [2024-05-15 10:05:51.691899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.089 [2024-05-15 10:05:51.692058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.089 [2024-05-15 10:05:51.692059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.663 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:06.925 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:06.925 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:06.925 "nvmf_tgt_1" 00:12:06.925 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:06.925 "nvmf_tgt_2" 00:12:06.925 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.925 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:07.186 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:07.186 
10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:07.186 true 00:12:07.186 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:07.186 true 00:12:07.449 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:07.449 10:05:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.449 rmmod nvme_tcp 00:12:07.449 rmmod nvme_fabrics 00:12:07.449 rmmod nvme_keyring 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2689262 ']' 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2689262 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 2689262 ']' 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 2689262 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2689262 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2689262' 00:12:07.449 killing process with pid 2689262 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 2689262 00:12:07.449 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 2689262 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.711 10:05:53 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.632 10:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.632 00:12:09.632 real 0m10.798s 00:12:09.632 user 0m9.204s 00:12:09.632 sys 0m5.479s 00:12:09.632 10:05:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:09.632 10:05:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:09.632 ************************************ 00:12:09.632 END TEST nvmf_multitarget 00:12:09.632 ************************************ 00:12:09.895 10:05:55 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:09.895 10:05:55 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:09.895 10:05:55 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:09.895 10:05:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.895 ************************************ 00:12:09.895 START TEST nvmf_rpc 00:12:09.895 ************************************ 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:09.895 * Looking for test storage... 00:12:09.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.895 10:05:55 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.895 
10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.895 10:05:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.055 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.055 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:18.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:18.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:18.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.056 
10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:18.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:18.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:12:18.056 00:12:18.056 --- 10.0.0.2 ping statistics --- 00:12:18.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.056 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:12:18.056 00:12:18.056 --- 10.0.0.1 ping statistics --- 00:12:18.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.056 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2693836 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2693836 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 2693836 ']' 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:18.056 10:06:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.056 [2024-05-15 10:06:02.986318] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:12:18.056 [2024-05-15 10:06:02.986380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.056 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.056 [2024-05-15 10:06:03.056861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.056 [2024-05-15 10:06:03.096382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.056 [2024-05-15 10:06:03.096426] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.057 [2024-05-15 10:06:03.096434] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.057 [2024-05-15 10:06:03.096441] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.057 [2024-05-15 10:06:03.096447] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.057 [2024-05-15 10:06:03.096538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.057 [2024-05-15 10:06:03.096662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.057 [2024-05-15 10:06:03.096818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.057 [2024-05-15 10:06:03.096819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:18.057 "tick_rate": 2400000000, 00:12:18.057 "poll_groups": [ 00:12:18.057 { 00:12:18.057 "name": "nvmf_tgt_poll_group_000", 00:12:18.057 "admin_qpairs": 0, 00:12:18.057 "io_qpairs": 0, 00:12:18.057 "current_admin_qpairs": 0, 00:12:18.057 "current_io_qpairs": 0, 00:12:18.057 "pending_bdev_io": 0, 00:12:18.057 "completed_nvme_io": 0, 00:12:18.057 "transports": [] 00:12:18.057 }, 00:12:18.057 { 00:12:18.057 "name": "nvmf_tgt_poll_group_001", 00:12:18.057 "admin_qpairs": 0, 00:12:18.057 "io_qpairs": 0, 00:12:18.057 "current_admin_qpairs": 0, 00:12:18.057 "current_io_qpairs": 0, 00:12:18.057 "pending_bdev_io": 0, 00:12:18.057 "completed_nvme_io": 0, 00:12:18.057 "transports": [] 00:12:18.057 }, 00:12:18.057 { 00:12:18.057 "name": "nvmf_tgt_poll_group_002", 00:12:18.057 "admin_qpairs": 0, 00:12:18.057 "io_qpairs": 0, 00:12:18.057 "current_admin_qpairs": 0, 00:12:18.057 "current_io_qpairs": 0, 00:12:18.057 "pending_bdev_io": 0, 00:12:18.057 "completed_nvme_io": 0, 00:12:18.057 "transports": [] 
00:12:18.057 }, 00:12:18.057 { 00:12:18.057 "name": "nvmf_tgt_poll_group_003", 00:12:18.057 "admin_qpairs": 0, 00:12:18.057 "io_qpairs": 0, 00:12:18.057 "current_admin_qpairs": 0, 00:12:18.057 "current_io_qpairs": 0, 00:12:18.057 "pending_bdev_io": 0, 00:12:18.057 "completed_nvme_io": 0, 00:12:18.057 "transports": [] 00:12:18.057 } 00:12:18.057 ] 00:12:18.057 }' 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:18.057 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.318 [2024-05-15 10:06:03.935363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:18.318 "tick_rate": 2400000000, 00:12:18.318 "poll_groups": [ 00:12:18.318 { 00:12:18.318 "name": "nvmf_tgt_poll_group_000", 00:12:18.318 "admin_qpairs": 0, 00:12:18.318 "io_qpairs": 0, 00:12:18.318 "current_admin_qpairs": 0, 00:12:18.318 "current_io_qpairs": 0, 00:12:18.318 "pending_bdev_io": 0, 00:12:18.318 "completed_nvme_io": 0, 00:12:18.318 "transports": [ 00:12:18.318 { 00:12:18.318 "trtype": "TCP" 00:12:18.318 } 00:12:18.318 ] 00:12:18.318 }, 00:12:18.318 { 00:12:18.318 "name": "nvmf_tgt_poll_group_001", 00:12:18.318 "admin_qpairs": 0, 00:12:18.318 "io_qpairs": 0, 00:12:18.318 "current_admin_qpairs": 0, 00:12:18.318 "current_io_qpairs": 0, 00:12:18.318 "pending_bdev_io": 0, 00:12:18.318 "completed_nvme_io": 0, 00:12:18.318 "transports": [ 00:12:18.318 { 00:12:18.318 "trtype": "TCP" 00:12:18.318 } 00:12:18.318 ] 00:12:18.318 }, 00:12:18.318 { 00:12:18.318 "name": "nvmf_tgt_poll_group_002", 00:12:18.318 "admin_qpairs": 0, 00:12:18.318 "io_qpairs": 0, 00:12:18.318 "current_admin_qpairs": 0, 00:12:18.318 "current_io_qpairs": 0, 00:12:18.318 "pending_bdev_io": 0, 00:12:18.318 "completed_nvme_io": 0, 00:12:18.318 "transports": [ 00:12:18.318 { 00:12:18.318 "trtype": "TCP" 00:12:18.318 } 00:12:18.318 ] 00:12:18.318 }, 00:12:18.318 { 00:12:18.318 "name": "nvmf_tgt_poll_group_003", 00:12:18.318 "admin_qpairs": 0, 00:12:18.318 "io_qpairs": 0, 00:12:18.318 "current_admin_qpairs": 0, 00:12:18.318 "current_io_qpairs": 0, 00:12:18.318 "pending_bdev_io": 0, 00:12:18.318 "completed_nvme_io": 0, 00:12:18.318 "transports": [ 00:12:18.318 { 00:12:18.318 "trtype": "TCP" 00:12:18.318 } 00:12:18.318 ] 00:12:18.318 } 00:12:18.318 ] 
00:12:18.318 }' 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:18.318 10:06:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.318 Malloc1 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.318 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.579 [2024-05-15 10:06:04.126959] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:18.579 [2024-05-15 10:06:04.127181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.579 10:06:04 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:18.579 [2024-05-15 10:06:04.154160] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:18.579 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:18.579 could not add new controller: failed to write to nvme-fabrics device 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:18.579 10:06:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.497 10:06:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
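The failed and successful connect attempts traced above exercise the subsystem's host allow list: after nvmf_subsystem_allow_any_host -d, the first nvme connect is rejected ("does not allow host ... failed to write to nvme-fabrics device"), and only once the host NQN is added via nvmf_subsystem_add_host does the identical connect succeed. A condensed sketch using the NQNs and address from this run (rpc_cmd is the harness wrapper around scripts/rpc.py; the HOSTNQN variable and hostid derivation are illustrative):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Rejected: allow_any_host was disabled and this host NQN is not on the allow list.
    nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 || true

    # Authorize the host NQN, then the same connect succeeds.
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect --hostnqn="$HOSTNQN" --hostid="${HOSTNQN#*uuid:}" -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420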
00:12:20.497 10:06:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:20.497 10:06:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.497 10:06:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:20.497 10:06:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:22.415 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:22.415 [2024-05-15 10:06:07.929700] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:22.416 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:22.416 could not add new controller: failed to write to nvme-fabrics device 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:22.416 10:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.805 10:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.805 10:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:23.805 10:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.805 10:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:23.805 10:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:25.726 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:25.726 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:25.726 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.726 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:25.726 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.726 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:25.726 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.988 [2024-05-15 10:06:11.663358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:25.988 10:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.907 10:06:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.907 10:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:27.907 10:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.907 10:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:27.907 10:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:29.871 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:29.871 
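The (( i++ <= 15 )) / lsblk polling that runs here and after every connect in this trace is the waitforserial helper from test/common/autotest_common.sh. Read back from the trace it amounts to roughly the following; this is a simplified sketch, not the helper's verbatim body:

    waitforserial() {
        # Poll until lsblk reports a block device whose SERIAL matches the subsystem's
        # serial number (SPDKISFASTANDAWESOME in this test), i.e. the fabrics connect completed.
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        sleep 2
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }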
10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:29.871 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.871 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:29.871 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.871 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:29.871 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.871 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.872 [2024-05-15 10:06:15.405570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:29.872 10:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.262 10:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.262 10:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:31.262 10:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.262 10:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:31.262 10:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:33.811 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.812 10:06:19 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.812 [2024-05-15 10:06:19.217993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:33.812 10:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.202 10:06:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.202 10:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:35.202 10:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.202 10:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:35.202 10:06:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # local i=0 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.119 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.120 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.120 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.120 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.120 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.120 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.120 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 [2024-05-15 10:06:22.916000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:37.382 10:06:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.772 10:06:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:12:38.772 10:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:38.772 10:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.772 10:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:38.772 10:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:40.690 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:40.690 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:40.690 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.690 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:40.690 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.690 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:40.690 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 
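Each pass of the $loops loop being traced here (target/rpc.sh@81-@94, continuing below) boils down to the sequence sketched next; every command and argument is as it appears in this trace, with HOSTNQN/HOSTID standing for the expanded host NQN and ID shown above:

    for i in $(seq 1 $loops); do   # loops=5 in this test
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp \
            -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done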
[2024-05-15 10:06:26.651082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:40.952 10:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.875 10:06:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.875 10:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:12:42.875 10:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.875 10:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:42.875 10:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:12:44.797 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 [2024-05-15 10:06:30.440298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 
-- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 [2024-05-15 10:06:30.504445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 [2024-05-15 10:06:30.564622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.798 
10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:44.798 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 [2024-05-15 10:06:30.624797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 [2024-05-15 10:06:30.684987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
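The repeated create/teardown cycle traced above is the per-iteration body of target/rpc.sh (script lines 99-107), driven through the framework's rpc_cmd wrapper. A minimal standalone sketch of the same cycle, calling scripts/rpc.py directly, is shown below; it assumes the target is already running on its default RPC socket, that a bdev named Malloc1 has been created beforehand, and that rpc.py is on PATH (the traced run goes through rpc_cmd instead).

#!/usr/bin/env bash
# Sketch of the subsystem lifecycle exercised once per loop iteration in target/rpc.sh.
# Assumptions (not taken from this log): rpc.py on PATH, target already started,
# bdev Malloc1 already present.
set -e
rpc=rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
loops=5   # placeholder; the traced script supplies its own $loops value

for i in $(seq 1 "$loops"); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME          # create subsystem with serial number
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420 # TCP listener, as in the NOTICE lines above
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                          # attach the namespace
    $rpc nvmf_subsystem_allow_any_host "$nqn"                          # open the subsystem to any host
    $rpc nvmf_subsystem_remove_ns "$nqn" 1                             # detach namespace id 1 again
    $rpc nvmf_delete_subsystem "$nqn"                                  # tear the subsystem down
done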
00:12:45.061 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:45.061 "tick_rate": 2400000000, 00:12:45.061 "poll_groups": [ 00:12:45.061 { 00:12:45.061 "name": "nvmf_tgt_poll_group_000", 00:12:45.061 "admin_qpairs": 0, 00:12:45.061 "io_qpairs": 224, 00:12:45.061 "current_admin_qpairs": 0, 00:12:45.061 "current_io_qpairs": 0, 00:12:45.061 "pending_bdev_io": 0, 00:12:45.061 "completed_nvme_io": 521, 00:12:45.061 "transports": [ 00:12:45.061 { 00:12:45.061 "trtype": "TCP" 00:12:45.061 } 00:12:45.061 ] 00:12:45.061 }, 00:12:45.061 { 00:12:45.061 "name": "nvmf_tgt_poll_group_001", 00:12:45.061 "admin_qpairs": 1, 00:12:45.061 "io_qpairs": 223, 00:12:45.061 "current_admin_qpairs": 0, 00:12:45.061 "current_io_qpairs": 0, 00:12:45.061 "pending_bdev_io": 0, 00:12:45.061 "completed_nvme_io": 274, 00:12:45.061 "transports": [ 00:12:45.061 { 00:12:45.061 "trtype": "TCP" 00:12:45.061 } 00:12:45.061 ] 00:12:45.061 }, 00:12:45.061 { 00:12:45.061 "name": "nvmf_tgt_poll_group_002", 00:12:45.061 "admin_qpairs": 6, 00:12:45.061 "io_qpairs": 218, 00:12:45.061 "current_admin_qpairs": 0, 00:12:45.061 "current_io_qpairs": 0, 00:12:45.061 "pending_bdev_io": 0, 00:12:45.061 "completed_nvme_io": 220, 00:12:45.061 "transports": [ 00:12:45.061 { 00:12:45.061 "trtype": "TCP" 00:12:45.061 } 00:12:45.061 ] 00:12:45.061 }, 00:12:45.061 { 00:12:45.061 "name": "nvmf_tgt_poll_group_003", 00:12:45.061 "admin_qpairs": 0, 00:12:45.061 "io_qpairs": 224, 00:12:45.062 "current_admin_qpairs": 0, 00:12:45.062 "current_io_qpairs": 0, 00:12:45.062 "pending_bdev_io": 0, 00:12:45.062 "completed_nvme_io": 224, 00:12:45.062 "transports": [ 00:12:45.062 { 00:12:45.062 "trtype": "TCP" 00:12:45.062 } 00:12:45.062 ] 00:12:45.062 } 00:12:45.062 ] 00:12:45.062 }' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:45.062 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:45.062 rmmod nvme_tcp 00:12:45.324 rmmod nvme_fabrics 00:12:45.324 rmmod nvme_keyring 00:12:45.324 
10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2693836 ']' 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2693836 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 2693836 ']' 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 2693836 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2693836 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2693836' 00:12:45.324 killing process with pid 2693836 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 2693836 00:12:45.324 [2024-05-15 10:06:30.958634] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:45.324 10:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 2693836 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.324 10:06:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.879 10:06:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:47.879 00:12:47.879 real 0m37.658s 00:12:47.879 user 1m53.812s 00:12:47.879 sys 0m7.347s 00:12:47.879 10:06:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:47.879 10:06:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.879 ************************************ 00:12:47.879 END TEST nvmf_rpc 00:12:47.879 ************************************ 00:12:47.879 10:06:33 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:47.879 10:06:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:47.879 10:06:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:47.879 10:06:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:47.879 ************************************ 00:12:47.879 START TEST nvmf_invalid 00:12:47.879 ************************************ 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:47.879 * Looking for test storage... 00:12:47.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.879 10:06:33 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.880 10:06:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:54.482 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:54.482 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:54.482 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:54.482 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.482 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.743 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.743 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.743 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.743 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.743 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.743 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.743 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.727 ms 00:12:54.743 00:12:54.743 --- 10.0.0.2 ping statistics --- 00:12:54.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.743 rtt min/avg/max/mdev = 0.727/0.727/0.727/0.000 ms 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:55.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:12:55.004 00:12:55.004 --- 10.0.0.1 ping statistics --- 00:12:55.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.004 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2704145 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2704145 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 2704145 ']' 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:55.004 10:06:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.004 [2024-05-15 10:06:40.643501] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
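The nvmf_tcp_init sequence traced just above splits the two detected ports into a target side and an initiator side: one interface is moved into a fresh network namespace and addressed as 10.0.0.2/24, the other stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420, and both directions are ping-tested before the target is launched inside the namespace. A condensed sketch of that setup is given below, using generic eth_tgt/eth_init names in place of the cvl_0_0/cvl_0_1 devices found on this particular node.

# Condensed sketch of the namespace split performed by nvmf_tcp_init (nvmf/common.sh).
# Assumption: eth_tgt/eth_init stand in for the node-specific cvl_0_0/cvl_0_1 ports.
ns=nvmf_tgt_ns
ip netns add "$ns"
ip link set eth_tgt netns "$ns"                          # target-side port goes into the namespace
ip addr add 10.0.0.1/24 dev eth_init                     # initiator side stays in the root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev eth_tgt
ip link set eth_init up
ip netns exec "$ns" ip link set eth_tgt up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i eth_init -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # root namespace -> target namespace
ip netns exec "$ns" ping -c 1 10.0.0.1                   # target namespace -> root namespace

# The target itself is then started inside the namespace (the traced run uses the
# full build/bin path and waits for the RPC socket via waitforlisten):
ip netns exec "$ns" nvmf_tgt -i 0 -e 0xFFFF -m 0xF &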
00:12:55.004 [2024-05-15 10:06:40.643553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.004 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.004 [2024-05-15 10:06:40.712332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.004 [2024-05-15 10:06:40.746579] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.004 [2024-05-15 10:06:40.746620] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.004 [2024-05-15 10:06:40.746628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.004 [2024-05-15 10:06:40.746634] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.004 [2024-05-15 10:06:40.746640] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.004 [2024-05-15 10:06:40.746779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.004 [2024-05-15 10:06:40.746896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.004 [2024-05-15 10:06:40.747055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.004 [2024-05-15 10:06:40.747056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.638 10:06:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:55.638 10:06:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:12:55.638 10:06:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.638 10:06:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:55.638 10:06:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:55.900 10:06:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.901 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:55.901 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12430 00:12:55.901 [2024-05-15 10:06:41.609374] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:55.901 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:55.901 { 00:12:55.901 "nqn": "nqn.2016-06.io.spdk:cnode12430", 00:12:55.901 "tgt_name": "foobar", 00:12:55.901 "method": "nvmf_create_subsystem", 00:12:55.901 "req_id": 1 00:12:55.901 } 00:12:55.901 Got JSON-RPC error response 00:12:55.901 response: 00:12:55.901 { 00:12:55.901 "code": -32603, 00:12:55.901 "message": "Unable to find target foobar" 00:12:55.901 }' 00:12:55.901 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:55.901 { 00:12:55.901 "nqn": "nqn.2016-06.io.spdk:cnode12430", 00:12:55.901 "tgt_name": "foobar", 00:12:55.901 "method": "nvmf_create_subsystem", 00:12:55.901 "req_id": 1 00:12:55.901 } 00:12:55.901 Got JSON-RPC error response 00:12:55.901 response: 00:12:55.901 { 00:12:55.901 "code": -32603, 00:12:55.901 "message": "Unable to find target foobar" 00:12:55.901 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:55.901 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:55.901 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11237 00:12:56.162 [2024-05-15 10:06:41.785994] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11237: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:56.162 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:56.162 { 00:12:56.162 "nqn": "nqn.2016-06.io.spdk:cnode11237", 00:12:56.162 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:56.162 "method": "nvmf_create_subsystem", 00:12:56.162 "req_id": 1 00:12:56.162 } 00:12:56.162 Got JSON-RPC error response 00:12:56.162 response: 00:12:56.162 { 00:12:56.162 "code": -32602, 00:12:56.162 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:56.162 }' 00:12:56.162 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:56.162 { 00:12:56.162 "nqn": "nqn.2016-06.io.spdk:cnode11237", 00:12:56.162 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:56.162 "method": "nvmf_create_subsystem", 00:12:56.162 "req_id": 1 00:12:56.162 } 00:12:56.162 Got JSON-RPC error response 00:12:56.162 response: 00:12:56.162 { 00:12:56.162 "code": -32602, 00:12:56.162 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:56.162 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:56.162 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:56.162 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2410 00:12:56.425 [2024-05-15 10:06:41.962509] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2410: invalid model number 'SPDK_Controller' 00:12:56.425 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:56.425 { 00:12:56.425 "nqn": "nqn.2016-06.io.spdk:cnode2410", 00:12:56.425 "model_number": "SPDK_Controller\u001f", 00:12:56.425 "method": "nvmf_create_subsystem", 00:12:56.425 "req_id": 1 00:12:56.425 } 00:12:56.425 Got JSON-RPC error response 00:12:56.425 response: 00:12:56.425 { 00:12:56.425 "code": -32602, 00:12:56.425 "message": "Invalid MN SPDK_Controller\u001f" 00:12:56.425 }' 00:12:56.425 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:56.425 { 00:12:56.425 "nqn": "nqn.2016-06.io.spdk:cnode2410", 00:12:56.425 "model_number": "SPDK_Controller\u001f", 00:12:56.425 "method": "nvmf_create_subsystem", 00:12:56.425 "req_id": 1 00:12:56.425 } 00:12:56.425 Got JSON-RPC error response 00:12:56.425 response: 00:12:56.425 { 00:12:56.425 "code": -32602, 00:12:56.425 "message": "Invalid MN SPDK_Controller\u001f" 00:12:56.425 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:56.425 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:56.425 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:56.425 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:56.425 10:06:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 69 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '+~p8uf~+M|GV6BXE@]O(G' 00:12:56.425 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '+~p8uf~+M|GV6BXE@]O(G' nqn.2016-06.io.spdk:cnode6092 00:12:56.688 [2024-05-15 10:06:42.299597] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6092: invalid serial number '+~p8uf~+M|GV6BXE@]O(G' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:56.688 { 00:12:56.688 "nqn": "nqn.2016-06.io.spdk:cnode6092", 00:12:56.688 "serial_number": "+~p8uf~+M|GV6BXE@]O(G", 00:12:56.688 "method": "nvmf_create_subsystem", 00:12:56.688 "req_id": 1 00:12:56.688 } 00:12:56.688 Got JSON-RPC error response 00:12:56.688 response: 00:12:56.688 { 00:12:56.688 "code": -32602, 00:12:56.688 
"message": "Invalid SN +~p8uf~+M|GV6BXE@]O(G" 00:12:56.688 }' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:56.688 { 00:12:56.688 "nqn": "nqn.2016-06.io.spdk:cnode6092", 00:12:56.688 "serial_number": "+~p8uf~+M|GV6BXE@]O(G", 00:12:56.688 "method": "nvmf_create_subsystem", 00:12:56.688 "req_id": 1 00:12:56.688 } 00:12:56.688 Got JSON-RPC error response 00:12:56.688 response: 00:12:56.688 { 00:12:56.688 "code": -32602, 00:12:56.688 "message": "Invalid SN +~p8uf~+M|GV6BXE@]O(G" 00:12:56.688 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x7a' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=g 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.688 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.689 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 52 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '/-ZUzISnBKP{g2W:Q+'\''N>fv9baQ4E[gmdVdnW4rL ' 00:12:56.951 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '/-ZUzISnBKP{g2W:Q+'\''N>fv9baQ4E[gmdVdnW4rL ' nqn.2016-06.io.spdk:cnode12806 00:12:57.213 [2024-05-15 10:06:42.785151] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12806: invalid model number '/-ZUzISnBKP{g2W:Q+'N>fv9baQ4E[gmdVdnW4rL ' 00:12:57.213 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:57.213 { 00:12:57.213 "nqn": "nqn.2016-06.io.spdk:cnode12806", 00:12:57.213 "model_number": "/-ZUzISnBKP{g2W:Q+'\''N>fv9baQ4E[gmdVdnW4rL ", 00:12:57.213 "method": "nvmf_create_subsystem", 00:12:57.213 "req_id": 1 00:12:57.213 } 00:12:57.213 Got JSON-RPC error response 00:12:57.213 response: 00:12:57.213 { 00:12:57.213 "code": -32602, 00:12:57.213 "message": "Invalid MN /-ZUzISnBKP{g2W:Q+'\''N>fv9baQ4E[gmdVdnW4rL " 00:12:57.213 }' 00:12:57.213 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:57.213 { 00:12:57.213 "nqn": "nqn.2016-06.io.spdk:cnode12806", 00:12:57.213 "model_number": "/-ZUzISnBKP{g2W:Q+'N>fv9baQ4E[gmdVdnW4rL ", 00:12:57.213 "method": "nvmf_create_subsystem", 00:12:57.213 "req_id": 1 00:12:57.213 } 00:12:57.213 Got JSON-RPC error response 00:12:57.213 response: 00:12:57.213 { 00:12:57.213 "code": -32602, 00:12:57.213 "message": "Invalid MN /-ZUzISnBKP{g2W:Q+'N>fv9baQ4E[gmdVdnW4rL " 00:12:57.213 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:57.213 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport --trtype tcp 00:12:57.213 [2024-05-15 10:06:42.957791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.213 10:06:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:57.475 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:57.475 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:57.475 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:57.475 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:57.475 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:57.736 [2024-05-15 10:06:43.314864] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:57.736 [2024-05-15 10:06:43.314927] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:57.736 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:57.736 { 00:12:57.736 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:57.736 "listen_address": { 00:12:57.736 "trtype": "tcp", 00:12:57.736 "traddr": "", 00:12:57.736 "trsvcid": "4421" 00:12:57.736 }, 00:12:57.736 "method": "nvmf_subsystem_remove_listener", 00:12:57.736 "req_id": 1 00:12:57.736 } 00:12:57.736 Got JSON-RPC error response 00:12:57.736 response: 00:12:57.736 { 00:12:57.736 "code": -32602, 00:12:57.736 "message": "Invalid parameters" 00:12:57.736 }' 00:12:57.736 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:57.736 { 00:12:57.736 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:57.736 "listen_address": { 00:12:57.736 "trtype": "tcp", 00:12:57.736 "traddr": "", 00:12:57.736 "trsvcid": "4421" 00:12:57.736 }, 00:12:57.736 "method": "nvmf_subsystem_remove_listener", 00:12:57.736 "req_id": 1 00:12:57.736 } 00:12:57.736 Got JSON-RPC error response 00:12:57.736 response: 00:12:57.736 { 00:12:57.736 "code": -32602, 00:12:57.736 "message": "Invalid parameters" 00:12:57.736 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:57.737 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24642 -i 0 00:12:57.737 [2024-05-15 10:06:43.491448] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24642: invalid cntlid range [0-65519] 00:12:57.737 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:57.737 { 00:12:57.737 "nqn": "nqn.2016-06.io.spdk:cnode24642", 00:12:57.737 "min_cntlid": 0, 00:12:57.737 "method": "nvmf_create_subsystem", 00:12:57.737 "req_id": 1 00:12:57.737 } 00:12:57.737 Got JSON-RPC error response 00:12:57.737 response: 00:12:57.737 { 00:12:57.737 "code": -32602, 00:12:57.737 "message": "Invalid cntlid range [0-65519]" 00:12:57.737 }' 00:12:57.737 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:57.737 { 00:12:57.737 "nqn": "nqn.2016-06.io.spdk:cnode24642", 00:12:57.737 "min_cntlid": 0, 00:12:57.737 "method": "nvmf_create_subsystem", 00:12:57.737 "req_id": 1 00:12:57.737 } 00:12:57.737 Got JSON-RPC error response 00:12:57.737 
response: 00:12:57.737 { 00:12:57.737 "code": -32602, 00:12:57.737 "message": "Invalid cntlid range [0-65519]" 00:12:57.737 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.737 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26871 -i 65520 00:12:57.998 [2024-05-15 10:06:43.667991] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26871: invalid cntlid range [65520-65519] 00:12:57.998 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:57.998 { 00:12:57.998 "nqn": "nqn.2016-06.io.spdk:cnode26871", 00:12:57.998 "min_cntlid": 65520, 00:12:57.998 "method": "nvmf_create_subsystem", 00:12:57.998 "req_id": 1 00:12:57.998 } 00:12:57.998 Got JSON-RPC error response 00:12:57.998 response: 00:12:57.998 { 00:12:57.998 "code": -32602, 00:12:57.998 "message": "Invalid cntlid range [65520-65519]" 00:12:57.998 }' 00:12:57.998 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:57.998 { 00:12:57.998 "nqn": "nqn.2016-06.io.spdk:cnode26871", 00:12:57.998 "min_cntlid": 65520, 00:12:57.998 "method": "nvmf_create_subsystem", 00:12:57.998 "req_id": 1 00:12:57.998 } 00:12:57.998 Got JSON-RPC error response 00:12:57.998 response: 00:12:57.998 { 00:12:57.998 "code": -32602, 00:12:57.998 "message": "Invalid cntlid range [65520-65519]" 00:12:57.998 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.998 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode163 -I 0 00:12:58.259 [2024-05-15 10:06:43.844557] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode163: invalid cntlid range [1-0] 00:12:58.259 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:58.259 { 00:12:58.259 "nqn": "nqn.2016-06.io.spdk:cnode163", 00:12:58.259 "max_cntlid": 0, 00:12:58.259 "method": "nvmf_create_subsystem", 00:12:58.259 "req_id": 1 00:12:58.259 } 00:12:58.259 Got JSON-RPC error response 00:12:58.259 response: 00:12:58.259 { 00:12:58.259 "code": -32602, 00:12:58.259 "message": "Invalid cntlid range [1-0]" 00:12:58.259 }' 00:12:58.259 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:58.259 { 00:12:58.259 "nqn": "nqn.2016-06.io.spdk:cnode163", 00:12:58.259 "max_cntlid": 0, 00:12:58.259 "method": "nvmf_create_subsystem", 00:12:58.259 "req_id": 1 00:12:58.259 } 00:12:58.259 Got JSON-RPC error response 00:12:58.259 response: 00:12:58.259 { 00:12:58.259 "code": -32602, 00:12:58.259 "message": "Invalid cntlid range [1-0]" 00:12:58.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.259 10:06:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11880 -I 65520 00:12:58.259 [2024-05-15 10:06:44.017111] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11880: invalid cntlid range [1-65520] 00:12:58.259 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:58.259 { 00:12:58.259 "nqn": "nqn.2016-06.io.spdk:cnode11880", 00:12:58.259 "max_cntlid": 65520, 00:12:58.259 "method": "nvmf_create_subsystem", 00:12:58.259 "req_id": 1 00:12:58.259 } 00:12:58.259 Got JSON-RPC error response 00:12:58.259 response: 00:12:58.259 { 00:12:58.259 
"code": -32602, 00:12:58.259 "message": "Invalid cntlid range [1-65520]" 00:12:58.259 }' 00:12:58.259 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:58.259 { 00:12:58.259 "nqn": "nqn.2016-06.io.spdk:cnode11880", 00:12:58.259 "max_cntlid": 65520, 00:12:58.259 "method": "nvmf_create_subsystem", 00:12:58.259 "req_id": 1 00:12:58.259 } 00:12:58.259 Got JSON-RPC error response 00:12:58.259 response: 00:12:58.259 { 00:12:58.259 "code": -32602, 00:12:58.259 "message": "Invalid cntlid range [1-65520]" 00:12:58.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.259 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14660 -i 6 -I 5 00:12:58.521 [2024-05-15 10:06:44.189655] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14660: invalid cntlid range [6-5] 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:58.521 { 00:12:58.521 "nqn": "nqn.2016-06.io.spdk:cnode14660", 00:12:58.521 "min_cntlid": 6, 00:12:58.521 "max_cntlid": 5, 00:12:58.521 "method": "nvmf_create_subsystem", 00:12:58.521 "req_id": 1 00:12:58.521 } 00:12:58.521 Got JSON-RPC error response 00:12:58.521 response: 00:12:58.521 { 00:12:58.521 "code": -32602, 00:12:58.521 "message": "Invalid cntlid range [6-5]" 00:12:58.521 }' 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:58.521 { 00:12:58.521 "nqn": "nqn.2016-06.io.spdk:cnode14660", 00:12:58.521 "min_cntlid": 6, 00:12:58.521 "max_cntlid": 5, 00:12:58.521 "method": "nvmf_create_subsystem", 00:12:58.521 "req_id": 1 00:12:58.521 } 00:12:58.521 Got JSON-RPC error response 00:12:58.521 response: 00:12:58.521 { 00:12:58.521 "code": -32602, 00:12:58.521 "message": "Invalid cntlid range [6-5]" 00:12:58.521 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:58.521 { 00:12:58.521 "name": "foobar", 00:12:58.521 "method": "nvmf_delete_target", 00:12:58.521 "req_id": 1 00:12:58.521 } 00:12:58.521 Got JSON-RPC error response 00:12:58.521 response: 00:12:58.521 { 00:12:58.521 "code": -32602, 00:12:58.521 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:58.521 }' 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:58.521 { 00:12:58.521 "name": "foobar", 00:12:58.521 "method": "nvmf_delete_target", 00:12:58.521 "req_id": 1 00:12:58.521 } 00:12:58.521 Got JSON-RPC error response 00:12:58.521 response: 00:12:58.521 { 00:12:58.521 "code": -32602, 00:12:58.521 "message": "The specified target doesn't exist, cannot delete it." 
00:12:58.521 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.521 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.783 rmmod nvme_tcp 00:12:58.783 rmmod nvme_fabrics 00:12:58.783 rmmod nvme_keyring 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2704145 ']' 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2704145 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 2704145 ']' 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 2704145 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2704145 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2704145' 00:12:58.783 killing process with pid 2704145 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 2704145 00:12:58.783 [2024-05-15 10:06:44.437773] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 2704145 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.783 10:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.334 10:06:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
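The trace above is invalid.sh's negative-path check: it assembles a 41-character string one byte at a time with printf %x / echo -e, hands it to nvmf_create_subsystem as the model number, then repeats the pattern for out-of-range cntlid values and for deleting a non-existent target, asserting only that the JSON-RPC reply contains the expected error text. A minimal sketch of that pattern, assuming a running nvmf_tgt on the default RPC socket; the gen_random_mn helper and the echo checks are illustrative and not part of the test script itself.

# Sketch of the negative-test pattern; rpc points at the tree's rpc.py as in the log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

gen_random_mn() {
    # Build a random printable string one character at a time, mirroring
    # the printf %x / echo -e loop traced above.
    local len=$1 out="" code
    for ((i = 0; i < len; i++)); do
        code=$(printf %x $((RANDOM % 94 + 33)))   # printable ASCII 33..126
        out+=$(echo -e "\x$code")
    done
    echo "$out"
}

# 41 characters exceed the 40-byte model-number field, so the call must fail
# with "Invalid MN ..." in the JSON-RPC error; a fixed leading '/' keeps the
# argument from being mistaken for an option.
mn="/$(gen_random_mn 40)"
out=$("$rpc" nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode12806 2>&1) || true
[[ $out == *"Invalid MN"* ]] && echo "model number rejected as expected"

# The cntlid checks work the same way: 0, 65520, or min > max fall outside
# the valid [1-65519] range and come back as -32602 errors.
out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24642 -i 0 2>&1) || true
[[ $out == *"Invalid cntlid range"* ]] && echo "cntlid range rejected as expected"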
00:13:01.334 00:13:01.334 real 0m13.385s 00:13:01.334 user 0m19.334s 00:13:01.334 sys 0m6.380s 00:13:01.334 10:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:01.334 10:06:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:01.334 ************************************ 00:13:01.334 END TEST nvmf_invalid 00:13:01.334 ************************************ 00:13:01.334 10:06:46 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:01.334 10:06:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:01.334 10:06:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:01.334 10:06:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.334 ************************************ 00:13:01.334 START TEST nvmf_abort 00:13:01.334 ************************************ 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:01.334 * Looking for test storage... 00:13:01.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.334 10:06:46 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.335 10:06:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.935 10:06:53 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:07.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:07.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.935 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:07.936 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:07.936 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.936 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.198 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.198 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.198 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:08.198 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.198 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.460 10:06:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:08.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:13:08.460 00:13:08.460 --- 10.0.0.2 ping statistics --- 00:13:08.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.460 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:13:08.460 00:13:08.460 --- 10.0.0.1 ping statistics --- 00:13:08.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.460 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2709264 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2709264 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 2709264 ']' 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:08.460 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.460 [2024-05-15 10:06:54.168602] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
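Before the abort workload runs, nvmftestinit above pairs the two E810 ports (cvl_0_0 / cvl_0_1), moves the target-side port into a dedicated network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420, and confirms reachability in both directions with single pings before nvmf_tgt is started inside that namespace. A condensed sketch of the same plumbing follows; the interface names and addresses are the ones reported in the trace, and the sketch assumes both ports start unconfigured in the root namespace.

# Namespace plumbing as traced above (interface names from the log).
ns=cvl_0_0_ns_spdk
tgt_if=cvl_0_0     # target side, moved into the namespace
ini_if=cvl_0_1     # initiator side, stays in the root namespace

ip netns add "$ns"
ip link set "$tgt_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$ini_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

ip link set "$ini_if" up
ip netns exec "$ns" ip link set "$tgt_if" up
ip netns exec "$ns" ip link set lo up

# Allow NVMe/TCP traffic on the default port before any listener exists.
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

# Verify both directions; the ping statistics printed in the log come from
# exactly these two probes.
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

# The target is then launched inside the namespace, as in the trace:
# ip netns exec "$ns" .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE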
00:13:08.460 [2024-05-15 10:06:54.168682] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.460 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.722 [2024-05-15 10:06:54.258914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:08.722 [2024-05-15 10:06:54.306132] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.722 [2024-05-15 10:06:54.306188] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.722 [2024-05-15 10:06:54.306196] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.722 [2024-05-15 10:06:54.306203] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.722 [2024-05-15 10:06:54.306208] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.722 [2024-05-15 10:06:54.306347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.722 [2024-05-15 10:06:54.306540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.722 [2024-05-15 10:06:54.306540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 [2024-05-15 10:06:54.992989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.297 10:06:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 Malloc0 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 Delay0 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:09.297 10:06:55 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 [2024-05-15 10:06:55.068461] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:09.297 [2024-05-15 10:06:55.068686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:09.297 10:06:55 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:09.559 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.559 [2024-05-15 10:06:55.191508] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:11.479 Initializing NVMe Controllers 00:13:11.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:11.480 controller IO queue size 128 less than required 00:13:11.480 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:11.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:11.480 Initialization complete. Launching workers. 
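At this point the abort test has stood up the target side over JSON-RPC and launched its workload: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (the four 1000000 latency arguments keep I/O outstanding long enough to be aborted), a subsystem carrying that namespace with a 10.0.0.2:4420 listener, and finally the abort example at queue depth 128. The commands below are the ones traced above, collected into one sketch; rpc_cmd here is a stand-in for running rpc.py inside the target namespace, as the test does.

# Stand-in for the test's rpc_cmd: rpc.py executed in the target namespace.
rpc_cmd() {
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
}

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
# Delay bdev in front of Malloc0 so queued I/O lingers and can be aborted.
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Abort workload run with the arguments from the trace; its per-namespace and
# per-controller abort counters are the statistics reported right after this point.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128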
00:13:11.480 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 28157 00:13:11.480 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28221, failed to submit 62 00:13:11.480 success 28161, unsuccess 60, failed 0 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:11.480 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:11.742 rmmod nvme_tcp 00:13:11.742 rmmod nvme_fabrics 00:13:11.742 rmmod nvme_keyring 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2709264 ']' 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2709264 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 2709264 ']' 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 2709264 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2709264 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2709264' 00:13:11.742 killing process with pid 2709264 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 2709264 00:13:11.742 [2024-05-15 10:06:57.398001] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 2709264 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.742 
10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.742 10:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.295 10:06:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.295 00:13:14.295 real 0m12.891s 00:13:14.295 user 0m13.232s 00:13:14.295 sys 0m6.464s 00:13:14.295 10:06:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:14.295 10:06:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:14.295 ************************************ 00:13:14.295 END TEST nvmf_abort 00:13:14.295 ************************************ 00:13:14.295 10:06:59 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:14.295 10:06:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:14.295 10:06:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:14.295 10:06:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:14.295 ************************************ 00:13:14.295 START TEST nvmf_ns_hotplug_stress 00:13:14.295 ************************************ 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:14.295 * Looking for test storage... 00:13:14.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.295 
10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.295 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.296 
10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.296 10:06:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.896 10:07:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.896 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:20.897 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:20.897 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.897 
10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:20.897 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:20.897 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.897 
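The trace above shows gather_supported_nvmf_pci_devs walking the detected Intel E810 ports (0x8086:0x159b at 0000:4b:00.0 and 0000:4b:00.1) and mapping each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup, assuming the same two PCI addresses from this run; the hard-coded device list and echo format are illustrative only, not part of the test suite:

  #!/usr/bin/env bash
  # Hypothetical re-creation of the sysfs walk seen in nvmf/common.sh:
  # every netdev bound to a PCI function appears as a directory under .../net/.
  pci_devs=("0000:4b:00.0" "0000:4b:00.1")   # assumption: the two E810 ports from this log
  for pci in "${pci_devs[@]}"; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $dev ]] || continue
          echo "Found net device under $pci: ${dev##*/}"
      done
  done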
10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.897 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:13:21.160 00:13:21.160 --- 10.0.0.2 ping statistics --- 00:13:21.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.160 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.410 ms 00:13:21.160 00:13:21.160 --- 10.0.0.1 ping statistics --- 00:13:21.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.160 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.160 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2714019 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2714019 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 2714019 ']' 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:21.422 10:07:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.422 [2024-05-15 10:07:07.034306] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:13:21.422 [2024-05-15 10:07:07.034369] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.422 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.422 [2024-05-15 10:07:07.123296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.422 [2024-05-15 10:07:07.170456] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
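Before the target application starts, nvmf_tcp_init moves one port (cvl_0_0) into a fresh network namespace, addresses the pair as 10.0.0.2 (target) and 10.0.0.1 (initiator), verifies reachability with ping in both directions, and then launches nvmf_tgt inside that namespace. A condensed sketch of the same bring-up under those assumptions; interface names, addresses, port and core mask are taken from the trace, while the relative nvmf_tgt path and error handling are simplifications:

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INIT_IF"                  # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root namespace -> target address
  ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace -> initiator address
  # start the target inside the namespace with the core mask used in this run
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &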
00:13:21.422 [2024-05-15 10:07:07.170510] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.422 [2024-05-15 10:07:07.170518] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.422 [2024-05-15 10:07:07.170525] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.422 [2024-05-15 10:07:07.170531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.422 [2024-05-15 10:07:07.170653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.422 [2024-05-15 10:07:07.170811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.422 [2024-05-15 10:07:07.170812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:22.422 10:07:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:22.422 [2024-05-15 10:07:07.981052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.422 10:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.422 10:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.683 [2024-05-15 10:07:08.322376] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:22.683 [2024-05-15 10:07:08.322617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.683 10:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:22.945 10:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:22.945 Malloc0 00:13:22.945 10:07:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:23.206 Delay0 00:13:23.206 10:07:08 
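By this point ns_hotplug_stress.sh has configured the running target over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a data listener and the discovery listener on 10.0.0.2:4420, and the Malloc0/Delay0 bdev chain. The equivalent rpc.py sequence, condensed from the xtrace above; all arguments are copied from the log, and the full scripts/rpc.py path is shortened on the assumption it is on PATH:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000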
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.467 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:23.467 NULL1 00:13:23.467 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:23.729 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2714489 00:13:23.729 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:23.729 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:23.729 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.729 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.991 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.991 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:23.991 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:24.253 true 00:13:24.253 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:24.253 10:07:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.253 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.520 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:24.520 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:24.783 true 00:13:24.783 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:24.783 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.783 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.045 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:25.045 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:13:25.045 true 00:13:25.306 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:25.306 10:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.306 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.568 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:25.568 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:25.568 true 00:13:25.830 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:25.830 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.830 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.091 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:26.091 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:26.091 true 00:13:26.352 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:26.352 10:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.352 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.613 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:26.613 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:26.613 true 00:13:26.613 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:26.613 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.873 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.135 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:27.135 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:27.135 true 00:13:27.135 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 
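The remainder of the run is the stress loop itself: with spdk_nvme_perf (PID 2714489) issuing 512-byte random reads against the subsystem, the script repeatedly detaches namespace 1, re-attaches Delay0, and grows the NULL1 bdev by one block, for as long as the perf process is still alive. A reconstruction of that cycle from the ns_hotplug_stress.sh xtrace (script lines 40-50) visible above; variable names follow the trace, rpc.py is again assumed to be on PATH, and this is a sketch rather than the script itself:

  null_size=1000
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  while kill -0 "$PERF_PID"; do               # keep hot-plugging while the I/O load runs
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"
  done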
00:13:27.135 10:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.396 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.657 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:27.657 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:27.657 true 00:13:27.657 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:27.657 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.917 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.178 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:28.178 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:28.178 true 00:13:28.178 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:28.178 10:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.439 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.439 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:28.439 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:28.699 true 00:13:28.699 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:28.699 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.960 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.960 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:28.960 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:29.221 true 00:13:29.221 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:29.221 10:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.482 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.482 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:29.482 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:29.743 true 00:13:29.743 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:29.743 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.003 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.003 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:30.003 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:30.264 true 00:13:30.264 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:30.264 10:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.526 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.526 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:30.526 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:30.787 true 00:13:30.787 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:30.787 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.787 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.048 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:31.048 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:31.309 true 00:13:31.309 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:31.309 10:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.309 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.569 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:31.569 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:31.830 true 00:13:31.830 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:31.830 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.830 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.091 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:32.091 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:32.352 true 00:13:32.352 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:32.352 10:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.352 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.613 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:32.613 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:32.613 true 00:13:32.875 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:32.875 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.875 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.136 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:33.136 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:33.136 true 00:13:33.136 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:33.136 10:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.397 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.659 10:07:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:33.659 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:33.659 true 00:13:33.659 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:33.659 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.921 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.183 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:34.183 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:34.183 true 00:13:34.183 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:34.183 10:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.445 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.708 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:34.708 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:34.708 true 00:13:34.708 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:34.708 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.971 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.233 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:35.233 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:35.233 true 00:13:35.233 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:35.233 10:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.495 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.756 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:35.756 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:35.756 true 00:13:35.756 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:35.756 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.018 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.018 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:36.018 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:36.279 true 00:13:36.279 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:36.279 10:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.541 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.541 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:36.541 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:36.802 true 00:13:36.802 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:36.802 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.064 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.064 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:37.064 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:37.325 true 00:13:37.325 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:37.325 10:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.325 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.586 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:37.586 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:37.850 true 00:13:37.850 
10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:37.850 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.850 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.174 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:38.174 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:38.174 true 00:13:38.174 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:38.174 10:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.436 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.697 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:38.697 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:38.697 true 00:13:38.697 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:38.697 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.959 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.221 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:13:39.221 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:39.221 true 00:13:39.221 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:39.221 10:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.484 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.484 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:13:39.484 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:39.746 true 00:13:39.746 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:39.746 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.006 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.006 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:13:40.006 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:40.268 true 00:13:40.268 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:40.268 10:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.530 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.530 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:13:40.530 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:40.792 true 00:13:40.792 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:40.792 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.053 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.053 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:13:41.053 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:41.314 true 00:13:41.314 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:41.314 10:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.575 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.575 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:13:41.575 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:41.835 true 00:13:41.835 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:41.835 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:13:41.835 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.095 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:13:42.095 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:42.356 true 00:13:42.356 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:42.356 10:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.356 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.617 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:13:42.617 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:13:42.878 true 00:13:42.878 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:42.878 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.878 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.139 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:13:43.139 10:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:13:43.400 true 00:13:43.400 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:43.400 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.400 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.661 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:13:43.661 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:13:43.922 true 00:13:43.922 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:43.922 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.922 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.183 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:13:44.183 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:13:44.183 true 00:13:44.444 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:44.444 10:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.444 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.705 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:13:44.705 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:13:44.705 true 00:13:44.966 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:44.966 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.966 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.227 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:13:45.227 10:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:13:45.227 true 00:13:45.227 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:45.227 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.486 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.748 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:13:45.748 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:13:45.748 true 00:13:45.748 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:45.748 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.010 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.272 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 
00:13:46.272 10:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:13:46.272 true 00:13:46.272 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:46.272 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.535 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.795 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:13:46.795 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:13:46.795 true 00:13:46.795 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:46.795 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.056 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.056 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:13:47.056 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:13:47.317 true 00:13:47.317 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:47.317 10:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.579 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.579 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:13:47.579 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:13:47.840 true 00:13:47.840 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:47.840 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.840 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.102 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:13:48.102 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1049 00:13:48.364 true 00:13:48.364 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:48.364 10:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.364 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.626 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:13:48.626 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:13:48.626 true 00:13:48.888 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:48.888 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.888 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.150 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:13:49.150 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:13:49.150 true 00:13:49.150 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:49.150 10:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.412 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.673 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:13:49.673 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:13:49.673 true 00:13:49.673 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:49.673 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.936 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.936 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:13:49.936 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:13:50.197 true 00:13:50.197 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 
00:13:50.197 10:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.458 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.458 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:13:50.458 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:13:50.719 true 00:13:50.719 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:50.719 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.981 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.981 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:13:50.981 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:13:51.243 true 00:13:51.243 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:51.243 10:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.243 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.504 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:13:51.504 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:13:51.766 true 00:13:51.766 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:51.766 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.766 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.026 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:13:52.026 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:13:52.287 true 00:13:52.287 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:52.287 10:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.287 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.548 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:13:52.548 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:13:52.809 true 00:13:52.809 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:52.809 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.809 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.070 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:13:53.070 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:13:53.332 true 00:13:53.332 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:53.332 10:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.332 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.594 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:13:53.594 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:13:53.594 true 00:13:53.856 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489 00:13:53.856 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.856 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.125 Initializing NVMe Controllers 00:13:54.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:54.125 Controller IO queue size 128, less than required. 00:13:54.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:54.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:54.125 Initialization complete. Launching workers. 
00:13:54.125 ========================================================
00:13:54.125 Latency(us)
00:13:54.125 Device Information : IOPS MiB/s Average min max
00:13:54.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31258.05 15.26 4094.86 2054.99 11298.48
00:13:54.125 ========================================================
00:13:54.125 Total : 31258.05 15.26 4094.86 2054.99 11298.48
00:13:54.125
00:13:54.125 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061
00:13:54.125 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061
00:13:54.125 true
00:13:54.125 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2714489
00:13:54.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2714489) - No such process
00:13:54.125 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2714489
00:13:54.125 10:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:54.429 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:54.692 null0
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:54.692 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:54.954 null1
00:13:54.954 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:54.954 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:54.954 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:55.216 null2
00:13:55.216 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:55.216 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:55.216 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:13:55.216 null3
00:13:55.216 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:55.216 10:07:40
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.216 10:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:55.477 null4 00:13:55.477 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.477 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.477 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:55.477 null5 00:13:55.739 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.739 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.739 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:55.739 null6 00:13:55.739 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.739 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.739 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:56.001 null7 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
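By this point the single-namespace phase has finished (the workload PID is gone and namespaces 1 and 2 were removed at script lines 54-55), and the multi-threaded phase begins: lines 58-60 create eight null bdevs, null0 through null7, with the same "100 4096" arguments (presumably size and block size) seen in each bdev_null_create call. A sketch of that setup loop as implied by the @58-@60 trace lines, with $rpc_py again standing in for the full scripts/rpc.py path:

    nthreads=8
    pids=()                                          # filled in later, one PID per worker
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096   # prints the new bdev name on success
    done

The bare null0 ... null7 lines interleaved with the trace are that printed name, i.e. the RPC's result for each create call.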
00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
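The add_remove traces (script lines 14-18) are the worker function that each background job runs: it is handed a namespace ID and a backing null bdev, then attaches and detaches that namespace ten times in a row. A sketch reconstructed from the @14, @16, @17 and @18 trace lines, not the verbatim script:

    add_remove() {
        local nsid=$1 bdev=$2                                                           # line 14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do                                                  # line 16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # line 17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # line 18
        done
    }

Because eight of these workers run against the same subsystem at once, the add/remove lines below are interleaved in whatever order the shells get scheduled.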
00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:56.001 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
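The workers are started by the loop at script lines 62-64 and joined by the wait at line 66; the eight PIDs it waits on (2721025, 2721027, ...) show up in the trace just below. A sketch of that launch-and-join pattern, again reconstructed from the trace:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # line 63: namespace IDs 1-8 backed by null0-null7
        pids+=($!)                         # line 64: remember the worker's PID
    done
    wait "${pids[@]}"                      # line 66: return only after all eight workers finish their ten iterations

From here to the end of this section the log is just those eight workers' nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls racing each other.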
00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2721025 2721027 2721030 2721033 2721036 2721039 2721041 2721044 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.002 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.264 10:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.264 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.525 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.785 
10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.785 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.786 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.786 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.786 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.786 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.786 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.045 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.045 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.046 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.307 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.307 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.307 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.307 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.307 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.308 10:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.308 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.575 10:07:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.575 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.836 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.837 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.099 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.362 10:07:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.362 10:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.362 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.625 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.888 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.888 
10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.149 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.150 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.411 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.411 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.411 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.411 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.411 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.411 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.411 10:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
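The churn exercised by the loop above can be reproduced by hand with the same RPCs; a minimal sketch, assuming the null0..null7 bdevs already exist on nqn.2016-06.io.spdk:cnode1 and using shuf only as a stand-in for whatever ordering ns_hotplug_stress.sh actually picks:

  for (( i = 0; i < 10; i++ )); do
      # attach each null bdev as namespace 1..8 in a shuffled order...
      for n in $(shuf -e {1..8}); do
          scripts/rpc.py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
      done
      # ...then detach them again in another shuffled order
      for n in $(shuf -e {1..8}); do
          scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
      done
  done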
00:13:59.411 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:59.672 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.673 rmmod nvme_tcp 00:13:59.673 rmmod nvme_fabrics 00:13:59.673 rmmod nvme_keyring 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2714019 ']' 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2714019 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 2714019 ']' 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 2714019 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:59.673 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2714019 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2714019' 00:13:59.935 killing process with pid 2714019 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 2714019 00:13:59.935 [2024-05-15 10:07:45.502841] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 2714019 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.935 10:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.488 10:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.488 00:14:02.488 real 0m48.030s 00:14:02.488 user 3m16.823s 00:14:02.488 sys 0m16.900s 00:14:02.488 10:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:02.488 10:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.488 ************************************ 00:14:02.488 END TEST nvmf_ns_hotplug_stress 00:14:02.488 ************************************ 00:14:02.488 10:07:47 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.488 10:07:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:02.488 10:07:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:02.488 10:07:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:02.488 ************************************ 00:14:02.488 START TEST nvmf_connect_stress 00:14:02.488 ************************************ 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:02.488 * Looking for test storage... 
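For reference, the nvmftestfini teardown traced just before the END TEST marker reduces to roughly this sequence (a sketch; it assumes _remove_spdk_ns does nothing more than delete the cvl_0_0_ns_spdk namespace):

  sync
  modprobe -v -r nvme-tcp      # also pulls out nvme_fabrics/nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"              # nvmf_tgt ran as pid 2714019 in this run
  ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1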
00:14:02.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.488 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.489 10:07:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.087 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:09.088 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:09.088 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:09.088 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.088 10:07:54 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:09.088 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.805 ms 00:14:09.088 00:14:09.088 --- 10.0.0.2 ping statistics --- 00:14:09.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.088 rtt min/avg/max/mdev = 0.805/0.805/0.805/0.000 ms 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.483 ms 00:14:09.088 00:14:09.088 --- 10.0.0.1 ping statistics --- 00:14:09.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.088 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2726069 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2726069 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 2726069 ']' 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:09.088 10:07:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.088 [2024-05-15 10:07:54.530716] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
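Condensed from the nvmf_tcp_init trace above, the target/initiator plumbing that those two pings verify amounts to the following (interface names cvl_0_0 and cvl_0_1 as detected on this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator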
00:14:09.088 [2024-05-15 10:07:54.530781] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.088 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.089 [2024-05-15 10:07:54.619482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:09.089 [2024-05-15 10:07:54.666867] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.089 [2024-05-15 10:07:54.666923] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.089 [2024-05-15 10:07:54.666931] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.089 [2024-05-15 10:07:54.666938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.089 [2024-05-15 10:07:54.666944] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.089 [2024-05-15 10:07:54.667074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.089 [2024-05-15 10:07:54.667239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.089 [2024-05-15 10:07:54.667240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.663 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:09.663 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.664 [2024-05-15 10:07:55.368829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.664 [2024-05-15 10:07:55.393122] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:09.664 [2024-05-15 10:07:55.414416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.664 NULL1 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2726172 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.664 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.926 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.188 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.188 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:10.188 10:07:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.188 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.188 10:07:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.449 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:10.449 10:07:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:10.449 10:07:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.449 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:10.449 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.023 10:07:56 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.023 10:07:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:11.023 10:07:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.023 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.023 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.285 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.285 10:07:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:11.285 10:07:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.285 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.285 10:07:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.547 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.547 10:07:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:11.547 10:07:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.547 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.547 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:11.809 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.809 10:07:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:11.809 10:07:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.809 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.809 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.070 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.070 10:07:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:12.070 10:07:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.070 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.070 10:07:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.643 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.643 10:07:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:12.643 10:07:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.643 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.643 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.905 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:12.905 10:07:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:12.905 10:07:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.905 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:12.905 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.166 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:14:13.166 10:07:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:13.166 10:07:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.166 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.166 10:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.427 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.427 10:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:13.427 10:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.427 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.427 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:13.690 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:13.690 10:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:13.690 10:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.690 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:13.690 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.264 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.264 10:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:14.264 10:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.264 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.264 10:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.526 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.526 10:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:14.526 10:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.526 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.526 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.788 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.788 10:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:14.788 10:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.788 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.788 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.051 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.051 10:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:15.051 10:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.051 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.051 10:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.316 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.316 10:08:01 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:15.316 10:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.316 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.316 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.640 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:15.640 10:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:15.640 10:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.640 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:15.640 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.213 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.213 10:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:16.213 10:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.213 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.213 10:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.475 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.475 10:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:16.475 10:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.475 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.475 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.737 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.737 10:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:16.737 10:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.737 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.737 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.999 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.999 10:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:16.999 10:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.999 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.999 10:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.261 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.261 10:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:17.261 10:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.261 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.261 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.848 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.848 10:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2726172 00:14:17.848 10:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.848 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.848 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.109 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.109 10:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:18.109 10:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.109 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.109 10:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.370 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.370 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:18.370 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.370 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.370 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.633 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.633 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:18.633 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.633 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.633 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.895 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:18.895 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:18.895 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.895 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:18.895 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.469 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.469 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:19.469 10:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.469 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.469 10:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.731 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.731 10:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:19.731 10:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.731 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.731 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.994 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.994 10:08:05 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726172 00:14:19.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2726172) - No such process 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2726172 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.994 rmmod nvme_tcp 00:14:19.994 rmmod nvme_fabrics 00:14:19.994 rmmod nvme_keyring 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2726069 ']' 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2726069 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 2726069 ']' 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 2726069 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2726069 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2726069' 00:14:19.994 killing process with pid 2726069 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 2726069 00:14:19.994 [2024-05-15 10:08:05.767843] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:19.994 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 2726069 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.257 10:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.175 10:08:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.175 00:14:22.175 real 0m20.177s 00:14:22.175 user 0m41.672s 00:14:22.175 sys 0m8.471s 00:14:22.175 10:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:22.175 10:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.175 ************************************ 00:14:22.175 END TEST nvmf_connect_stress 00:14:22.175 ************************************ 00:14:22.438 10:08:07 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:22.438 10:08:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:22.438 10:08:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:22.438 10:08:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.438 ************************************ 00:14:22.438 START TEST nvmf_fused_ordering 00:14:22.438 ************************************ 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:22.438 * Looking for test storage... 
00:14:22.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.438 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.439 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.439 10:08:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:22.439 10:08:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.439 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:22.439 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.439 10:08:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.439 10:08:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:30.594 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:30.594 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:30.594 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:30.594 10:08:14 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:30.594 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.594 10:08:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:30.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.762 ms 00:14:30.594 00:14:30.594 --- 10.0.0.2 ping statistics --- 00:14:30.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.594 rtt min/avg/max/mdev = 0.762/0.762/0.762/0.000 ms 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:30.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:14:30.594 00:14:30.594 --- 10.0.0.1 ping statistics --- 00:14:30.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.594 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.594 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2732453 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2732453 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 2732453 ']' 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:30.595 10:08:15 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 [2024-05-15 10:08:15.377833] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
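The exchange above is the tail of nvmf/common.sh bringing up the phy test network: the two ice ports found earlier (cvl_0_0 and cvl_0_1) are split across a network namespace so the target can listen on 10.0.0.2 while the initiator reaches it from 10.0.0.1, and the two pings confirm the path in both directions before the target application is started. A minimal sketch of that bring-up, re-tracing only the commands visible in the trace and assuming the same interface names and the cvl_0_0_ns_spdk namespace used by nvmf/common.sh, is:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1             # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                                     # namespace that owns the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                               # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator

Once this check passes, the target-side commands are wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why the nvmf_tgt above is launched through that prefix.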
00:14:30.595 [2024-05-15 10:08:15.377898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.595 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.595 [2024-05-15 10:08:15.465469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.595 [2024-05-15 10:08:15.511710] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.595 [2024-05-15 10:08:15.511765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.595 [2024-05-15 10:08:15.511772] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.595 [2024-05-15 10:08:15.511779] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.595 [2024-05-15 10:08:15.511785] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.595 [2024-05-15 10:08:15.511807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 [2024-05-15 10:08:16.207566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 [2024-05-15 10:08:16.231558] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:30.595 [2024-05-15 10:08:16.231830] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 NULL1 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.595 10:08:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:30.595 [2024-05-15 10:08:16.300389] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
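With the listener up, the target-side configuration for this fused_ordering run reduces to a short RPC sequence, all of it visible in the trace above. A sketch of the equivalent calls, assuming they are issued through SPDK's scripts/rpc.py against the /var/tmp/spdk.sock socket the target reports (the test itself issues them through the rpc_cmd wrapper from autotest_common.sh), is:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # exactly the flags issued by the test above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
                                                                   # -a any host, -s serial number, -m max namespaces
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512 B blocks (the 1GB namespace below)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # Initiator side: the fused_ordering app connects to that listener and drives the
  # numbered submissions reported as fused_ordering(N) below.
  test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The connect_stress run earlier in this log follows the same pattern, differing only in the initiator binary (test/nvme/connect_stress/connect_stress with -c 0x1 and -t 10).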
00:14:30.595 [2024-05-15 10:08:16.300450] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732574 ] 00:14:30.595 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.539 Attached to nqn.2016-06.io.spdk:cnode1 00:14:31.539 Namespace ID: 1 size: 1GB 00:14:31.539 fused_ordering(0) 00:14:31.539 fused_ordering(1) 00:14:31.539 fused_ordering(2) 00:14:31.539 fused_ordering(3) 00:14:31.539 fused_ordering(4) 00:14:31.539 fused_ordering(5) 00:14:31.539 fused_ordering(6) 00:14:31.539 fused_ordering(7) 00:14:31.539 fused_ordering(8) 00:14:31.539 fused_ordering(9) 00:14:31.539 fused_ordering(10) 00:14:31.539 fused_ordering(11) 00:14:31.539 fused_ordering(12) 00:14:31.539 fused_ordering(13) 00:14:31.539 fused_ordering(14) 00:14:31.539 fused_ordering(15) 00:14:31.539 fused_ordering(16) 00:14:31.540 fused_ordering(17) 00:14:31.540 fused_ordering(18) 00:14:31.540 fused_ordering(19) 00:14:31.540 fused_ordering(20) 00:14:31.540 fused_ordering(21) 00:14:31.540 fused_ordering(22) 00:14:31.540 fused_ordering(23) 00:14:31.540 fused_ordering(24) 00:14:31.540 fused_ordering(25) 00:14:31.540 fused_ordering(26) 00:14:31.540 fused_ordering(27) 00:14:31.540 fused_ordering(28) 00:14:31.540 fused_ordering(29) 00:14:31.540 fused_ordering(30) 00:14:31.540 fused_ordering(31) 00:14:31.540 fused_ordering(32) 00:14:31.540 fused_ordering(33) 00:14:31.540 fused_ordering(34) 00:14:31.540 fused_ordering(35) 00:14:31.540 fused_ordering(36) 00:14:31.540 fused_ordering(37) 00:14:31.540 fused_ordering(38) 00:14:31.540 fused_ordering(39) 00:14:31.540 fused_ordering(40) 00:14:31.540 fused_ordering(41) 00:14:31.540 fused_ordering(42) 00:14:31.540 fused_ordering(43) 00:14:31.540 fused_ordering(44) 00:14:31.540 fused_ordering(45) 00:14:31.540 fused_ordering(46) 00:14:31.540 fused_ordering(47) 00:14:31.540 fused_ordering(48) 00:14:31.540 fused_ordering(49) 00:14:31.540 fused_ordering(50) 00:14:31.540 fused_ordering(51) 00:14:31.540 fused_ordering(52) 00:14:31.540 fused_ordering(53) 00:14:31.540 fused_ordering(54) 00:14:31.540 fused_ordering(55) 00:14:31.540 fused_ordering(56) 00:14:31.540 fused_ordering(57) 00:14:31.540 fused_ordering(58) 00:14:31.540 fused_ordering(59) 00:14:31.540 fused_ordering(60) 00:14:31.540 fused_ordering(61) 00:14:31.540 fused_ordering(62) 00:14:31.540 fused_ordering(63) 00:14:31.540 fused_ordering(64) 00:14:31.540 fused_ordering(65) 00:14:31.540 fused_ordering(66) 00:14:31.540 fused_ordering(67) 00:14:31.540 fused_ordering(68) 00:14:31.540 fused_ordering(69) 00:14:31.540 fused_ordering(70) 00:14:31.540 fused_ordering(71) 00:14:31.540 fused_ordering(72) 00:14:31.540 fused_ordering(73) 00:14:31.540 fused_ordering(74) 00:14:31.540 fused_ordering(75) 00:14:31.540 fused_ordering(76) 00:14:31.540 fused_ordering(77) 00:14:31.540 fused_ordering(78) 00:14:31.540 fused_ordering(79) 00:14:31.540 fused_ordering(80) 00:14:31.540 fused_ordering(81) 00:14:31.540 fused_ordering(82) 00:14:31.540 fused_ordering(83) 00:14:31.540 fused_ordering(84) 00:14:31.540 fused_ordering(85) 00:14:31.540 fused_ordering(86) 00:14:31.540 fused_ordering(87) 00:14:31.540 fused_ordering(88) 00:14:31.540 fused_ordering(89) 00:14:31.540 fused_ordering(90) 00:14:31.540 fused_ordering(91) 00:14:31.540 fused_ordering(92) 00:14:31.540 fused_ordering(93) 00:14:31.540 fused_ordering(94) 00:14:31.540 fused_ordering(95) 00:14:31.540 fused_ordering(96) 00:14:31.540 
fused_ordering(97) 00:14:31.540 fused_ordering(98) 00:14:31.540 fused_ordering(99) 00:14:31.540 fused_ordering(100) 00:14:31.540 fused_ordering(101) 00:14:31.540 fused_ordering(102) 00:14:31.540 fused_ordering(103) 00:14:31.540 fused_ordering(104) 00:14:31.540 fused_ordering(105) 00:14:31.540 fused_ordering(106) 00:14:31.540 fused_ordering(107) 00:14:31.540 fused_ordering(108) 00:14:31.540 fused_ordering(109) 00:14:31.540 fused_ordering(110) 00:14:31.540 fused_ordering(111) 00:14:31.540 fused_ordering(112) 00:14:31.540 fused_ordering(113) 00:14:31.540 fused_ordering(114) 00:14:31.540 fused_ordering(115) 00:14:31.540 fused_ordering(116) 00:14:31.540 fused_ordering(117) 00:14:31.540 fused_ordering(118) 00:14:31.540 fused_ordering(119) 00:14:31.540 fused_ordering(120) 00:14:31.540 fused_ordering(121) 00:14:31.540 fused_ordering(122) 00:14:31.540 fused_ordering(123) 00:14:31.540 fused_ordering(124) 00:14:31.540 fused_ordering(125) 00:14:31.540 fused_ordering(126) 00:14:31.540 fused_ordering(127) 00:14:31.540 fused_ordering(128) 00:14:31.540 fused_ordering(129) 00:14:31.540 fused_ordering(130) 00:14:31.540 fused_ordering(131) 00:14:31.540 fused_ordering(132) 00:14:31.540 fused_ordering(133) 00:14:31.540 fused_ordering(134) 00:14:31.540 fused_ordering(135) 00:14:31.540 fused_ordering(136) 00:14:31.540 fused_ordering(137) 00:14:31.540 fused_ordering(138) 00:14:31.540 fused_ordering(139) 00:14:31.540 fused_ordering(140) 00:14:31.540 fused_ordering(141) 00:14:31.540 fused_ordering(142) 00:14:31.540 fused_ordering(143) 00:14:31.540 fused_ordering(144) 00:14:31.540 fused_ordering(145) 00:14:31.540 fused_ordering(146) 00:14:31.540 fused_ordering(147) 00:14:31.540 fused_ordering(148) 00:14:31.540 fused_ordering(149) 00:14:31.540 fused_ordering(150) 00:14:31.540 fused_ordering(151) 00:14:31.540 fused_ordering(152) 00:14:31.540 fused_ordering(153) 00:14:31.540 fused_ordering(154) 00:14:31.540 fused_ordering(155) 00:14:31.540 fused_ordering(156) 00:14:31.540 fused_ordering(157) 00:14:31.540 fused_ordering(158) 00:14:31.540 fused_ordering(159) 00:14:31.540 fused_ordering(160) 00:14:31.540 fused_ordering(161) 00:14:31.540 fused_ordering(162) 00:14:31.540 fused_ordering(163) 00:14:31.540 fused_ordering(164) 00:14:31.540 fused_ordering(165) 00:14:31.540 fused_ordering(166) 00:14:31.540 fused_ordering(167) 00:14:31.540 fused_ordering(168) 00:14:31.540 fused_ordering(169) 00:14:31.540 fused_ordering(170) 00:14:31.540 fused_ordering(171) 00:14:31.540 fused_ordering(172) 00:14:31.540 fused_ordering(173) 00:14:31.540 fused_ordering(174) 00:14:31.540 fused_ordering(175) 00:14:31.540 fused_ordering(176) 00:14:31.540 fused_ordering(177) 00:14:31.540 fused_ordering(178) 00:14:31.540 fused_ordering(179) 00:14:31.540 fused_ordering(180) 00:14:31.540 fused_ordering(181) 00:14:31.540 fused_ordering(182) 00:14:31.540 fused_ordering(183) 00:14:31.540 fused_ordering(184) 00:14:31.540 fused_ordering(185) 00:14:31.540 fused_ordering(186) 00:14:31.540 fused_ordering(187) 00:14:31.540 fused_ordering(188) 00:14:31.540 fused_ordering(189) 00:14:31.540 fused_ordering(190) 00:14:31.540 fused_ordering(191) 00:14:31.540 fused_ordering(192) 00:14:31.540 fused_ordering(193) 00:14:31.540 fused_ordering(194) 00:14:31.540 fused_ordering(195) 00:14:31.540 fused_ordering(196) 00:14:31.540 fused_ordering(197) 00:14:31.540 fused_ordering(198) 00:14:31.540 fused_ordering(199) 00:14:31.540 fused_ordering(200) 00:14:31.540 fused_ordering(201) 00:14:31.540 fused_ordering(202) 00:14:31.540 fused_ordering(203) 00:14:31.540 fused_ordering(204) 
00:14:31.540 fused_ordering(205) through fused_ordering(956): sequential fused_ordering completion entries, timestamps advancing from 00:14:31.540 to 00:14:35.390, no errors reported in this span. 00:14:35.390
fused_ordering(957) 00:14:35.390 fused_ordering(958) 00:14:35.390 fused_ordering(959) 00:14:35.390 fused_ordering(960) 00:14:35.390 fused_ordering(961) 00:14:35.390 fused_ordering(962) 00:14:35.390 fused_ordering(963) 00:14:35.390 fused_ordering(964) 00:14:35.390 fused_ordering(965) 00:14:35.390 fused_ordering(966) 00:14:35.390 fused_ordering(967) 00:14:35.390 fused_ordering(968) 00:14:35.390 fused_ordering(969) 00:14:35.390 fused_ordering(970) 00:14:35.390 fused_ordering(971) 00:14:35.390 fused_ordering(972) 00:14:35.390 fused_ordering(973) 00:14:35.390 fused_ordering(974) 00:14:35.390 fused_ordering(975) 00:14:35.390 fused_ordering(976) 00:14:35.390 fused_ordering(977) 00:14:35.390 fused_ordering(978) 00:14:35.390 fused_ordering(979) 00:14:35.390 fused_ordering(980) 00:14:35.390 fused_ordering(981) 00:14:35.390 fused_ordering(982) 00:14:35.390 fused_ordering(983) 00:14:35.390 fused_ordering(984) 00:14:35.390 fused_ordering(985) 00:14:35.390 fused_ordering(986) 00:14:35.390 fused_ordering(987) 00:14:35.390 fused_ordering(988) 00:14:35.390 fused_ordering(989) 00:14:35.390 fused_ordering(990) 00:14:35.390 fused_ordering(991) 00:14:35.390 fused_ordering(992) 00:14:35.390 fused_ordering(993) 00:14:35.390 fused_ordering(994) 00:14:35.390 fused_ordering(995) 00:14:35.390 fused_ordering(996) 00:14:35.390 fused_ordering(997) 00:14:35.390 fused_ordering(998) 00:14:35.390 fused_ordering(999) 00:14:35.390 fused_ordering(1000) 00:14:35.390 fused_ordering(1001) 00:14:35.390 fused_ordering(1002) 00:14:35.391 fused_ordering(1003) 00:14:35.391 fused_ordering(1004) 00:14:35.391 fused_ordering(1005) 00:14:35.391 fused_ordering(1006) 00:14:35.391 fused_ordering(1007) 00:14:35.391 fused_ordering(1008) 00:14:35.391 fused_ordering(1009) 00:14:35.391 fused_ordering(1010) 00:14:35.391 fused_ordering(1011) 00:14:35.391 fused_ordering(1012) 00:14:35.391 fused_ordering(1013) 00:14:35.391 fused_ordering(1014) 00:14:35.391 fused_ordering(1015) 00:14:35.391 fused_ordering(1016) 00:14:35.391 fused_ordering(1017) 00:14:35.391 fused_ordering(1018) 00:14:35.391 fused_ordering(1019) 00:14:35.391 fused_ordering(1020) 00:14:35.391 fused_ordering(1021) 00:14:35.391 fused_ordering(1022) 00:14:35.391 fused_ordering(1023) 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.391 rmmod nvme_tcp 00:14:35.391 rmmod nvme_fabrics 00:14:35.391 rmmod nvme_keyring 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2732453 ']' 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2732453 
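The nvmftestfini/nvmfcleanup sequence traced above tears down the fused_ordering fixture: it syncs, then unloads the kernel NVMe/TCP initiator stack (the bare rmmod nvme_tcp, rmmod nvme_fabrics and rmmod nvme_keyring lines are the verbose output of modprobe -v -r) before killing the target process. A minimal sketch of that module cleanup, assuming no NVMe-oF controllers are still connected:

    set +e                       # teardown tolerates modprobe failures, as in the trace
    sync
    modprobe -v -r nvme-tcp      # verbose removal; prints the rmmod lines seen above
    modprobe -v -r nvme-fabrics
    set -e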
00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 2732453 ']' 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 2732453 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2732453 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2732453' 00:14:35.391 killing process with pid 2732453 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 2732453 00:14:35.391 [2024-05-15 10:08:20.944794] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:35.391 10:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 2732453 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.391 10:08:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.941 10:08:23 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.941 00:14:37.941 real 0m15.098s 00:14:37.941 user 0m9.103s 00:14:37.941 sys 0m8.787s 00:14:37.941 10:08:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:37.941 10:08:23 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.941 ************************************ 00:14:37.941 END TEST nvmf_fused_ordering 00:14:37.941 ************************************ 00:14:37.941 10:08:23 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:37.941 10:08:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:37.941 10:08:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:37.941 10:08:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:37.941 ************************************ 00:14:37.941 START TEST nvmf_delete_subsystem 00:14:37.942 ************************************ 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
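Each suite is driven through run_test from autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys summary shown above and then hands control to the next script. The delete_subsystem suite below is started exactly as traced; $rootdir here is shorthand for the Jenkins checkout path:

    # run_test <name> <script> [args...] adds the banners and timing around the script
    run_test nvmf_delete_subsystem "$rootdir/test/nvmf/target/delete_subsystem.sh" --transport=tcp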
00:14:37.942 * Looking for test storage... 00:14:37.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:37.942 10:08:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:44.577 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:44.577 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:44.577 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:44.577 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.577 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:44.578 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:44.578 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.578 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.578 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:44.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:14:44.840 00:14:44.840 --- 10.0.0.2 ping statistics --- 00:14:44.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.840 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.496 ms 00:14:44.840 00:14:44.840 --- 10.0.0.1 ping statistics --- 00:14:44.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.840 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2737798 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2737798 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 2737798 ']' 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
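nvmftestinit above moves the target-side port of the NIC pair (cvl_0_0) into its own network namespace, leaves the initiator port (cvl_0_1) in the root namespace, and verifies connectivity with the two pings before launching the target. A condensed sketch of that plumbing, with paths shortened; the addresses and flags match the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # launch nvmf_tgt inside the namespace; waitforlisten then polls /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &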
00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:44.840 10:08:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.103 [2024-05-15 10:08:30.665440] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:14:45.103 [2024-05-15 10:08:30.665506] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.103 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.103 [2024-05-15 10:08:30.737252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.103 [2024-05-15 10:08:30.776435] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.103 [2024-05-15 10:08:30.776480] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.103 [2024-05-15 10:08:30.776488] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.103 [2024-05-15 10:08:30.776495] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.103 [2024-05-15 10:08:30.776501] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.103 [2024-05-15 10:08:30.776649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.103 [2024-05-15 10:08:30.776652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.676 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:45.676 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:14:45.676 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:45.676 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:45.676 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 [2024-05-15 10:08:31.488454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.938 10:08:31 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 [2024-05-15 10:08:31.504441] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:45.938 [2024-05-15 10:08:31.504623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 NULL1 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 Delay0 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2737826 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:45.938 10:08:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:45.938 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.938 [2024-05-15 10:08:31.589285] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
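With the target up, the harness configures it over the RPC socket; rpc_cmd in the trace effectively forwards to scripts/rpc.py. The delete_subsystem test builds a subsystem whose namespace sits on a delay bdev (bdev_delay_create with 1,000,000-microsecond read and write latencies, i.e. roughly one second per I/O), so requests are still queued when the subsystem is deleted. Equivalent rpc.py calls for the traced configuration:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0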
00:14:47.855 10:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:47.855 10:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable
00:14:47.855 10:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:48.117 (long run of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions and repeated "starting I/O failed: -6" messages from the perf I/O still in flight against the deleted subsystem; the distinct nvme_tcp errors in that stretch:)
00:14:48.118 [2024-05-15 10:08:33.677334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80ca10 is same with the state(5) to be set
00:14:48.118 [2024-05-15 10:08:33.679019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff5b8000c00 is same with the state(5) to be set
00:14:49.063 [2024-05-15 10:08:34.649831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80c830 is same with the state(5) to be set
00:14:49.063 [2024-05-15 10:08:34.680950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829c60 is same with the state(5) to be set
00:14:49.063 [2024-05-15 10:08:34.681879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff5b800bfe0 is same with the state(5) to be set
00:14:49.064 [2024-05-15 10:08:34.681961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff5b800c600 is same with the state(5) to be set
00:14:49.064 [2024-05-15 10:08:34.682148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x829640 is same with the state(5) to be set
00:14:49.064 Initializing NVMe Controllers
00:14:49.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:49.064 Controller IO queue size 128, less than required.
00:14:49.064 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:49.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:49.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:49.064 Initialization complete. Launching workers.
00:14:49.064 ========================================================
00:14:49.064 Latency(us)
00:14:49.064 Device Information                                                       :    IOPS   MiB/s    Average       min        max
00:14:49.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2 :  173.53    0.08  886898.66    245.29 1009912.75
00:14:49.064 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3 :  162.59    0.08  913056.27    312.18 1010616.95
00:14:49.064 ========================================================
00:14:49.064 Total                                                                    :  336.12    0.16  899551.82    245.29 1010616.95
00:14:49.064
00:14:49.064 [2024-05-15 10:08:34.682728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80c830 (9): Bad file descriptor
00:14:49.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:49.064 10:08:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:49.064 10:08:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:14:49.064 10:08:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2737826
00:14:49.064 10:08:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2737826
00:14:49.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2737826) - No such process
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2737826
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 2737826
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 2737826
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]]
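The trace just above is the core of the test: the subsystem is deleted while spdk_nvme_perf still has I/O outstanding, the in-flight commands complete with errors, and the script then polls until the perf process disappears. A minimal sketch of that polling step, using the PID and retry limit visible in the log (variable names and the error message are illustrative, not copied from delete_subsystem.sh):

  perf_pid=2737826                      # PID of the backgrounded spdk_nvme_perf run in this log
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      # give up if perf survives more than ~15s of 0.5s polls
      (( delay++ > 30 )) && { echo "perf did not exit" >&2; exit 1; }
      sleep 0.5
  done
  # once the PID is gone, "kill -0" reports "No such process" and a follow-up
  # "wait" on it returns non-zero, which is what the NOT wait check asserts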
00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:49.637 [2024-05-15 10:08:35.214195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2738593 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:49.637 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:49.637 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.637 [2024-05-15 10:08:35.280406] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
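The subsystem is then rebuilt and a fresh perf load (perf_pid 2738593 above) is started before the next delete/poll round. Condensed into plain CLI calls, with the long Jenkins paths dropped and scripts/rpc.py standing in for the script's rpc_cmd helper, the setup traced above amounts roughly to this sketch:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 3-second 70/30 randrw load at queue depth 128 against the new listener
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!                           # 2738593 in this run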
00:14:50.210 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.210 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:50.210 10:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:50.471 10:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:50.471 10:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:50.471 10:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.045 10:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.045 10:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:51.045 10:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:51.619 10:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:51.619 10:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:51.619 10:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.192 10:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:52.192 10:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:52.192 10:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.765 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:52.765 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:52.765 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:52.765 Initializing NVMe Controllers 00:14:52.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.765 Controller IO queue size 128, less than required. 00:14:52.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:52.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:52.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:52.765 Initialization complete. Launching workers. 
00:14:52.765 ======================================================== 00:14:52.765 Latency(us) 00:14:52.765 Device Information : IOPS MiB/s Average min max 00:14:52.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002870.01 1000230.60 1043522.46 00:14:52.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003685.12 1000649.83 1008962.49 00:14:52.765 ======================================================== 00:14:52.765 Total : 256.00 0.12 1003277.57 1000230.60 1043522.46 00:14:52.765 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2738593 00:14:53.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2738593) - No such process 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2738593 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.027 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.027 rmmod nvme_tcp 00:14:53.027 rmmod nvme_fabrics 00:14:53.027 rmmod nvme_keyring 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2737798 ']' 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2737798 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 2737798 ']' 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 2737798 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2737798 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2737798' 00:14:53.289 killing process with pid 2737798 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 2737798 00:14:53.289 [2024-05-15 10:08:38.895477] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:53.289 10:08:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 2737798 00:14:53.289 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.289 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.289 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.289 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.289 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.289 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.289 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.290 10:08:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.840 10:08:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.840 00:14:55.840 real 0m17.869s 00:14:55.840 user 0m30.581s 00:14:55.840 sys 0m6.302s 00:14:55.840 10:08:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:55.840 10:08:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:55.840 ************************************ 00:14:55.840 END TEST nvmf_delete_subsystem 00:14:55.840 ************************************ 00:14:55.840 10:08:41 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:55.840 10:08:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:55.840 10:08:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:55.840 10:08:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.840 ************************************ 00:14:55.840 START TEST nvmf_ns_masking 00:14:55.840 ************************************ 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:55.840 * Looking for test storage... 
00:14:55.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.840 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=e55463ef-9f9c-4175-b262-d30f0dfd3669 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.841 10:08:41 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.841 10:08:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:02.445 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:02.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.445 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:02.446 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:02.446 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.446 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:15:02.707 00:15:02.707 --- 10.0.0.2 ping statistics --- 00:15:02.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.707 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.485 ms 00:15:02.707 00:15:02.707 --- 10.0.0.1 ping statistics --- 00:15:02.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.707 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2743501 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2743501 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 2743501 ']' 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:02.707 10:08:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.969 [2024-05-15 10:08:48.540662] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
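Everything from the PCI scan down to the pings is the usual two-port TCP test plumbing: one port of the e810 NIC (cvl_0_0) is moved into a network namespace and becomes the target side, the other (cvl_0_1) stays in the host namespace as the initiator, and a single iptables rule opens port 4420. Pulled out of the trace above into a condensed sketch (not the full common.sh logic):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
  # the target app itself then runs inside the namespace (full build/bin path in the log):
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &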
00:15:02.969 [2024-05-15 10:08:48.540741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.969 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.969 [2024-05-15 10:08:48.613638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.969 [2024-05-15 10:08:48.652189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.969 [2024-05-15 10:08:48.652236] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.969 [2024-05-15 10:08:48.652244] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.969 [2024-05-15 10:08:48.652250] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.969 [2024-05-15 10:08:48.652256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.969 [2024-05-15 10:08:48.652348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.969 [2024-05-15 10:08:48.652557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.969 [2024-05-15 10:08:48.652557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.969 [2024-05-15 10:08:48.652405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.541 10:08:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:03.541 10:08:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:15:03.541 10:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.541 10:08:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:03.541 10:08:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:03.803 10:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.803 10:08:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:03.803 [2024-05-15 10:08:49.498397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.803 10:08:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:03.803 10:08:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:03.803 10:08:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:04.065 Malloc1 00:15:04.065 10:08:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:04.326 Malloc2 00:15:04.326 10:08:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:04.326 10:08:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:04.587 10:08:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.587 [2024-05-15 10:08:50.338491] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:04.587 [2024-05-15 10:08:50.338764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.587 10:08:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:04.587 10:08:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e55463ef-9f9c-4175-b262-d30f0dfd3669 -a 10.0.0.2 -s 4420 -i 4 00:15:04.849 10:08:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:04.849 10:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:15:04.849 10:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.849 10:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:15:04.849 10:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:06.781 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:07.079 [ 0]:0x1 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9277ec9199ef4bc2b415e0e9f8c1b69f 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9277ec9199ef4bc2b415e0e9f8c1b69f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:07.079 [ 0]:0x1 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9277ec9199ef4bc2b415e0e9f8c1b69f 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9277ec9199ef4bc2b415e0e9f8c1b69f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:07.079 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:07.079 [ 1]:0x2 00:15:07.352 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:07.352 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:07.352 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4add9125e0ce48f29b61ff626c8dfe0a 00:15:07.352 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4add9125e0ce48f29b61ff626c8dfe0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.352 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:07.352 10:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.352 10:08:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.613 10:08:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:07.613 10:08:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:07.613 10:08:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e55463ef-9f9c-4175-b262-d30f0dfd3669 -a 10.0.0.2 -s 4420 -i 4 00:15:07.874 10:08:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:07.874 10:08:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:15:07.874 10:08:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.874 10:08:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:15:07.874 10:08:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:15:07.874 10:08:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # 
grep -c SPDKISFASTANDAWESOME 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:09.790 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:10.051 [ 0]:0x2 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4add9125e0ce48f29b61ff626c8dfe0a 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4add9125e0ce48f29b61ff626c8dfe0a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.051 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:10.312 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:10.312 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.312 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:10.312 [ 0]:0x1 00:15:10.312 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.312 10:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9277ec9199ef4bc2b415e0e9f8c1b69f 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9277ec9199ef4bc2b415e0e9f8c1b69f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.312 [ 1]:0x2 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4add9125e0ce48f29b61ff626c8dfe0a 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4add9125e0ce48f29b61ff626c8dfe0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.312 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.573 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:10.833 10:08:56 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:10.833 [ 0]:0x2 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4add9125e0ce48f29b61ff626c8dfe0a 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4add9125e0ce48f29b61ff626c8dfe0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.833 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e55463ef-9f9c-4175-b262-d30f0dfd3669 -a 10.0.0.2 -s 4420 -i 4 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:15:11.094 10:08:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:15:13.643 10:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:13.643 [ 0]:0x1 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9277ec9199ef4bc2b415e0e9f8c1b69f 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9277ec9199ef4bc2b415e0e9f8c1b69f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.643 [ 1]:0x2 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4add9125e0ce48f29b61ff626c8dfe0a 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4add9125e0ce48f29b61ff626c8dfe0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:13.643 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.644 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:13.644 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.644 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:13.644 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.644 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:13.644 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:13.644 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:13.905 [ 0]:0x2 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4add9125e0ce48f29b61ff626c8dfe0a 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4add9125e0ce48f29b61ff626c8dfe0a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:13.905 [2024-05-15 10:08:59.655472] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:13.905 
request: 00:15:13.905 { 00:15:13.905 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.905 "nsid": 2, 00:15:13.905 "host": "nqn.2016-06.io.spdk:host1", 00:15:13.905 "method": "nvmf_ns_remove_host", 00:15:13.905 "req_id": 1 00:15:13.905 } 00:15:13.905 Got JSON-RPC error response 00:15:13.905 response: 00:15:13.905 { 00:15:13.905 "code": -32602, 00:15:13.905 "message": "Invalid parameters" 00:15:13.905 } 00:15:13.905 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:13.906 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:14.167 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.167 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.167 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:14.168 [ 0]:0x2 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=4add9125e0ce48f29b61ff626c8dfe0a 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 4add9125e0ce48f29b61ff626c8dfe0a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.168 10:08:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:14.429 rmmod nvme_tcp 00:15:14.429 rmmod nvme_fabrics 00:15:14.429 rmmod nvme_keyring 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2743501 ']' 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2743501 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 2743501 ']' 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 2743501 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2743501 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2743501' 00:15:14.429 killing process with pid 2743501 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 2743501 00:15:14.429 [2024-05-15 10:09:00.136108] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:14.429 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 2743501 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.691 10:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.606 10:09:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:16.606 00:15:16.606 real 0m21.184s 00:15:16.606 user 0m50.900s 00:15:16.606 sys 0m6.921s 00:15:16.606 10:09:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:16.606 10:09:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:16.606 ************************************ 00:15:16.606 END TEST nvmf_ns_masking 00:15:16.606 ************************************ 00:15:16.606 10:09:02 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:16.606 10:09:02 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:16.606 10:09:02 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:16.606 10:09:02 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:16.606 10:09:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:16.867 ************************************ 00:15:16.867 START TEST nvmf_nvme_cli 00:15:16.867 ************************************ 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:16.867 * Looking for test storage... 
00:15:16.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.867 10:09:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:16.868 10:09:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.031 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.031 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.031 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:25.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:25.032 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:25.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.032 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:25.033 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms 00:15:25.033 00:15:25.033 --- 10.0.0.2 ping statistics --- 00:15:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.033 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:25.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:15:25.033 00:15:25.033 --- 10.0.0.1 ping statistics --- 00:15:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.033 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2750577 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2750577 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # '[' -z 2750577 ']' 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:25.033 10:09:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.033 [2024-05-15 10:09:09.851591] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:15:25.033 [2024-05-15 10:09:09.851660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.033 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.033 [2024-05-15 10:09:09.923434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.033 [2024-05-15 10:09:09.962976] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.033 [2024-05-15 10:09:09.963021] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.033 [2024-05-15 10:09:09.963029] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.033 [2024-05-15 10:09:09.963036] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.033 [2024-05-15 10:09:09.963042] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.033 [2024-05-15 10:09:09.963180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.033 [2024-05-15 10:09:09.963362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.033 [2024-05-15 10:09:09.963685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.033 [2024-05-15 10:09:09.963686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # return 0 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 [2024-05-15 10:09:10.688063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 Malloc0 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 Malloc1 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 [2024-05-15 10:09:10.777459] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:25.034 [2024-05-15 10:09:10.777698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.034 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:25.299 00:15:25.299 Discovery Log Number of Records 2, Generation counter 2 00:15:25.299 =====Discovery Log Entry 0====== 00:15:25.299 trtype: tcp 00:15:25.299 adrfam: ipv4 00:15:25.299 subtype: current discovery subsystem 00:15:25.299 treq: not required 00:15:25.299 portid: 0 00:15:25.299 trsvcid: 4420 00:15:25.299 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:25.299 traddr: 10.0.0.2 00:15:25.299 eflags: explicit discovery connections, duplicate discovery information 00:15:25.299 sectype: none 00:15:25.299 =====Discovery Log Entry 1====== 00:15:25.299 trtype: tcp 00:15:25.299 adrfam: ipv4 00:15:25.299 subtype: nvme subsystem 00:15:25.299 treq: not required 00:15:25.299 portid: 0 00:15:25.299 trsvcid: 4420 00:15:25.299 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:25.299 traddr: 10.0.0.2 00:15:25.299 eflags: none 00:15:25.299 sectype: none 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:25.299 10:09:10 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.694 10:09:12 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:26.694 10:09:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local i=0 00:15:26.694 10:09:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.694 10:09:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:15:26.694 10:09:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:15:26.694 10:09:12 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # sleep 2 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # return 0 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:29.246 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:29.247 /dev/nvme0n1 ]] 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:29.247 10:09:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:29.247 10:09:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # local i=0 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # return 0 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.509 rmmod nvme_tcp 00:15:29.509 rmmod nvme_fabrics 00:15:29.509 rmmod nvme_keyring 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2750577 ']' 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2750577 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' -z 2750577 ']' 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # kill -0 2750577 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # uname 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2750577 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2750577' 00:15:29.509 killing process with pid 2750577 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # kill 2750577 00:15:29.509 [2024-05-15 10:09:15.218949] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:29.509 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # wait 2750577 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.772 10:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.689 10:09:17 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.689 00:15:31.689 real 0m15.008s 00:15:31.689 user 0m23.627s 00:15:31.689 sys 0m5.952s 00:15:31.689 10:09:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:31.689 10:09:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.689 ************************************ 00:15:31.689 END TEST nvmf_nvme_cli 00:15:31.689 ************************************ 00:15:31.689 10:09:17 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:31.689 10:09:17 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:31.689 10:09:17 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:31.689 10:09:17 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:31.689 10:09:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.954 ************************************ 00:15:31.954 
START TEST nvmf_vfio_user 00:15:31.954 ************************************ 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:31.954 * Looking for test storage... 00:15:31.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.954 10:09:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2752376 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2752376' 00:15:31.955 Process pid: 2752376 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2752376 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2752376 ']' 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:31.955 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:31.955 [2024-05-15 10:09:17.717333] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:15:31.955 [2024-05-15 10:09:17.717382] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.955 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.249 [2024-05-15 10:09:17.780327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.249 [2024-05-15 10:09:17.812571] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.249 [2024-05-15 10:09:17.812613] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.249 [2024-05-15 10:09:17.812621] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.249 [2024-05-15 10:09:17.812628] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.249 [2024-05-15 10:09:17.812633] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.249 [2024-05-15 10:09:17.812776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.249 [2024-05-15 10:09:17.812894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.249 [2024-05-15 10:09:17.813047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.249 [2024-05-15 10:09:17.813049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.249 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:32.249 10:09:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:15:32.249 10:09:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:33.195 10:09:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:33.457 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:33.457 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:33.457 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.457 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:33.457 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:33.457 Malloc1 00:15:33.719 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:33.719 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:33.980 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:33.980 [2024-05-15 10:09:19.746782] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:34.241 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.241 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:34.242 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:34.242 Malloc2 00:15:34.242 10:09:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:34.503 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:34.503 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
00:15:34.765 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:34.765 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:34.765 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.765 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:34.765 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:34.765 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:34.765 [2024-05-15 10:09:20.480009] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:15:34.765 [2024-05-15 10:09:20.480053] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752769 ] 00:15:34.765 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.765 [2024-05-15 10:09:20.517950] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:34.765 [2024-05-15 10:09:20.520255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:34.765 [2024-05-15 10:09:20.520274] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f192095b000 00:15:34.765 [2024-05-15 10:09:20.521252] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.522251] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.523263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.524267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.525276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.526281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.527276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.528288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:34.765 [2024-05-15 10:09:20.529299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:34.765 [2024-05-15 10:09:20.529310] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f191f71f000 00:15:34.765 [2024-05-15 10:09:20.530640] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:34.765 [2024-05-15 10:09:20.551572] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:34.765 [2024-05-15 10:09:20.551596] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:34.765 [2024-05-15 10:09:20.554426] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:34.765 [2024-05-15 10:09:20.554469] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:34.765 [2024-05-15 10:09:20.554560] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:34.765 [2024-05-15 10:09:20.554577] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:34.765 [2024-05-15 10:09:20.554584] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:34.765 [2024-05-15 10:09:20.555425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:34.765 [2024-05-15 10:09:20.555435] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:34.765 [2024-05-15 10:09:20.555442] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:34.765 [2024-05-15 10:09:20.556433] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:34.765 [2024-05-15 10:09:20.556441] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:34.765 [2024-05-15 10:09:20.556448] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:34.765 [2024-05-15 10:09:20.557436] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:34.765 [2024-05-15 10:09:20.557444] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:34.765 [2024-05-15 10:09:20.558445] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:34.765 [2024-05-15 10:09:20.558453] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:34.765 [2024-05-15 10:09:20.558458] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:34.765 [2024-05-15 10:09:20.558465] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:34.765 
[2024-05-15 10:09:20.558570] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:34.765 [2024-05-15 10:09:20.558576] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:34.765 [2024-05-15 10:09:20.558581] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:34.765 [2024-05-15 10:09:20.559446] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:35.030 [2024-05-15 10:09:20.560452] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:35.030 [2024-05-15 10:09:20.561461] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:35.030 [2024-05-15 10:09:20.562459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.030 [2024-05-15 10:09:20.562511] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:35.030 [2024-05-15 10:09:20.563466] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:35.030 [2024-05-15 10:09:20.563474] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:35.030 [2024-05-15 10:09:20.563479] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563501] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:35.030 [2024-05-15 10:09:20.563511] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563528] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.030 [2024-05-15 10:09:20.563533] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.030 [2024-05-15 10:09:20.563547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.030 [2024-05-15 10:09:20.563595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:35.030 [2024-05-15 10:09:20.563604] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:35.030 [2024-05-15 10:09:20.563609] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:35.030 [2024-05-15 10:09:20.563613] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:35.030 [2024-05-15 10:09:20.563617] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:35.030 [2024-05-15 10:09:20.563622] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:35.030 [2024-05-15 10:09:20.563627] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:35.030 [2024-05-15 10:09:20.563631] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563641] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:35.030 [2024-05-15 10:09:20.563664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:35.030 [2024-05-15 10:09:20.563676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.030 [2024-05-15 10:09:20.563685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.030 [2024-05-15 10:09:20.563693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.030 [2024-05-15 10:09:20.563701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.030 [2024-05-15 10:09:20.563705] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563712] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:35.030 [2024-05-15 10:09:20.563734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:35.030 [2024-05-15 10:09:20.563739] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:35.030 [2024-05-15 10:09:20.563748] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563755] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:35.030 [2024-05-15 10:09:20.563762] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:35.031 [2024-05-15 
10:09:20.563784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.563833] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563841] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563848] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:35.031 [2024-05-15 10:09:20.563852] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:35.031 [2024-05-15 10:09:20.563858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.563867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.563878] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:35.031 [2024-05-15 10:09:20.563890] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563898] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563904] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.031 [2024-05-15 10:09:20.563908] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.031 [2024-05-15 10:09:20.563914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.563931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.563941] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563948] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.031 [2024-05-15 10:09:20.563959] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.031 [2024-05-15 10:09:20.563965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.563974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.563983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:35.031 
[2024-05-15 10:09:20.563990] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.563997] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.564004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.564010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.564014] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:35.031 [2024-05-15 10:09:20.564019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:35.031 [2024-05-15 10:09:20.564024] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:35.031 [2024-05-15 10:09:20.564044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.564053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.564065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.564072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.564082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.564091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.564102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.564113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.564123] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:35.031 [2024-05-15 10:09:20.564128] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:35.031 [2024-05-15 10:09:20.564131] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:35.031 [2024-05-15 10:09:20.564135] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:35.031 [2024-05-15 10:09:20.564141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:35.031 [2024-05-15 10:09:20.564148] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:35.031 [2024-05-15 10:09:20.564152] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:35.031 [2024-05-15 10:09:20.564158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.564165] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:35.031 [2024-05-15 10:09:20.564169] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.031 [2024-05-15 10:09:20.564175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.564184] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:35.031 [2024-05-15 10:09:20.564188] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:35.031 [2024-05-15 10:09:20.564194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:35.031 [2024-05-15 10:09:20.564203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.564214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.564223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:35.031 [2024-05-15 10:09:20.564232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:35.031 ===================================================== 00:15:35.031 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:35.031 ===================================================== 00:15:35.031 Controller Capabilities/Features 00:15:35.031 ================================ 00:15:35.031 Vendor ID: 4e58 00:15:35.031 Subsystem Vendor ID: 4e58 00:15:35.031 Serial Number: SPDK1 00:15:35.031 Model Number: SPDK bdev Controller 00:15:35.031 Firmware Version: 24.05 00:15:35.031 Recommended Arb Burst: 6 00:15:35.031 IEEE OUI Identifier: 8d 6b 50 00:15:35.031 Multi-path I/O 00:15:35.031 May have multiple subsystem ports: Yes 00:15:35.031 May have multiple controllers: Yes 00:15:35.031 Associated with SR-IOV VF: No 00:15:35.031 Max Data Transfer Size: 131072 00:15:35.031 Max Number of Namespaces: 32 00:15:35.031 Max Number of I/O Queues: 127 00:15:35.031 NVMe Specification Version (VS): 1.3 00:15:35.031 NVMe Specification Version (Identify): 1.3 00:15:35.031 Maximum Queue Entries: 256 00:15:35.031 Contiguous Queues Required: Yes 00:15:35.031 Arbitration Mechanisms Supported 00:15:35.031 Weighted Round Robin: Not Supported 00:15:35.031 Vendor Specific: Not Supported 00:15:35.031 Reset Timeout: 15000 ms 00:15:35.031 Doorbell Stride: 4 bytes 00:15:35.031 NVM Subsystem Reset: Not Supported 00:15:35.031 Command Sets Supported 00:15:35.031 NVM Command Set: Supported 00:15:35.031 Boot Partition: Not Supported 00:15:35.031 Memory Page Size Minimum: 4096 bytes 00:15:35.031 Memory Page Size Maximum: 4096 bytes 00:15:35.031 Persistent Memory Region: Not Supported 00:15:35.031 Optional Asynchronous 
Events Supported 00:15:35.031 Namespace Attribute Notices: Supported 00:15:35.031 Firmware Activation Notices: Not Supported 00:15:35.031 ANA Change Notices: Not Supported 00:15:35.031 PLE Aggregate Log Change Notices: Not Supported 00:15:35.031 LBA Status Info Alert Notices: Not Supported 00:15:35.031 EGE Aggregate Log Change Notices: Not Supported 00:15:35.031 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.031 Zone Descriptor Change Notices: Not Supported 00:15:35.031 Discovery Log Change Notices: Not Supported 00:15:35.031 Controller Attributes 00:15:35.031 128-bit Host Identifier: Supported 00:15:35.031 Non-Operational Permissive Mode: Not Supported 00:15:35.031 NVM Sets: Not Supported 00:15:35.031 Read Recovery Levels: Not Supported 00:15:35.031 Endurance Groups: Not Supported 00:15:35.031 Predictable Latency Mode: Not Supported 00:15:35.031 Traffic Based Keep ALive: Not Supported 00:15:35.031 Namespace Granularity: Not Supported 00:15:35.031 SQ Associations: Not Supported 00:15:35.031 UUID List: Not Supported 00:15:35.031 Multi-Domain Subsystem: Not Supported 00:15:35.031 Fixed Capacity Management: Not Supported 00:15:35.031 Variable Capacity Management: Not Supported 00:15:35.032 Delete Endurance Group: Not Supported 00:15:35.032 Delete NVM Set: Not Supported 00:15:35.032 Extended LBA Formats Supported: Not Supported 00:15:35.032 Flexible Data Placement Supported: Not Supported 00:15:35.032 00:15:35.032 Controller Memory Buffer Support 00:15:35.032 ================================ 00:15:35.032 Supported: No 00:15:35.032 00:15:35.032 Persistent Memory Region Support 00:15:35.032 ================================ 00:15:35.032 Supported: No 00:15:35.032 00:15:35.032 Admin Command Set Attributes 00:15:35.032 ============================ 00:15:35.032 Security Send/Receive: Not Supported 00:15:35.032 Format NVM: Not Supported 00:15:35.032 Firmware Activate/Download: Not Supported 00:15:35.032 Namespace Management: Not Supported 00:15:35.032 Device Self-Test: Not Supported 00:15:35.032 Directives: Not Supported 00:15:35.032 NVMe-MI: Not Supported 00:15:35.032 Virtualization Management: Not Supported 00:15:35.032 Doorbell Buffer Config: Not Supported 00:15:35.032 Get LBA Status Capability: Not Supported 00:15:35.032 Command & Feature Lockdown Capability: Not Supported 00:15:35.032 Abort Command Limit: 4 00:15:35.032 Async Event Request Limit: 4 00:15:35.032 Number of Firmware Slots: N/A 00:15:35.032 Firmware Slot 1 Read-Only: N/A 00:15:35.032 Firmware Activation Without Reset: N/A 00:15:35.032 Multiple Update Detection Support: N/A 00:15:35.032 Firmware Update Granularity: No Information Provided 00:15:35.032 Per-Namespace SMART Log: No 00:15:35.032 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.032 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:35.032 Command Effects Log Page: Supported 00:15:35.032 Get Log Page Extended Data: Supported 00:15:35.032 Telemetry Log Pages: Not Supported 00:15:35.032 Persistent Event Log Pages: Not Supported 00:15:35.032 Supported Log Pages Log Page: May Support 00:15:35.032 Commands Supported & Effects Log Page: Not Supported 00:15:35.032 Feature Identifiers & Effects Log Page:May Support 00:15:35.032 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.032 Data Area 4 for Telemetry Log: Not Supported 00:15:35.032 Error Log Page Entries Supported: 128 00:15:35.032 Keep Alive: Supported 00:15:35.032 Keep Alive Granularity: 10000 ms 00:15:35.032 00:15:35.032 NVM Command Set Attributes 00:15:35.032 ========================== 
00:15:35.032 Submission Queue Entry Size 00:15:35.032 Max: 64 00:15:35.032 Min: 64 00:15:35.032 Completion Queue Entry Size 00:15:35.032 Max: 16 00:15:35.032 Min: 16 00:15:35.032 Number of Namespaces: 32 00:15:35.032 Compare Command: Supported 00:15:35.032 Write Uncorrectable Command: Not Supported 00:15:35.032 Dataset Management Command: Supported 00:15:35.032 Write Zeroes Command: Supported 00:15:35.032 Set Features Save Field: Not Supported 00:15:35.032 Reservations: Not Supported 00:15:35.032 Timestamp: Not Supported 00:15:35.032 Copy: Supported 00:15:35.032 Volatile Write Cache: Present 00:15:35.032 Atomic Write Unit (Normal): 1 00:15:35.032 Atomic Write Unit (PFail): 1 00:15:35.032 Atomic Compare & Write Unit: 1 00:15:35.032 Fused Compare & Write: Supported 00:15:35.032 Scatter-Gather List 00:15:35.032 SGL Command Set: Supported (Dword aligned) 00:15:35.032 SGL Keyed: Not Supported 00:15:35.032 SGL Bit Bucket Descriptor: Not Supported 00:15:35.032 SGL Metadata Pointer: Not Supported 00:15:35.032 Oversized SGL: Not Supported 00:15:35.032 SGL Metadata Address: Not Supported 00:15:35.032 SGL Offset: Not Supported 00:15:35.032 Transport SGL Data Block: Not Supported 00:15:35.032 Replay Protected Memory Block: Not Supported 00:15:35.032 00:15:35.032 Firmware Slot Information 00:15:35.032 ========================= 00:15:35.032 Active slot: 1 00:15:35.032 Slot 1 Firmware Revision: 24.05 00:15:35.032 00:15:35.032 00:15:35.032 Commands Supported and Effects 00:15:35.032 ============================== 00:15:35.032 Admin Commands 00:15:35.032 -------------- 00:15:35.032 Get Log Page (02h): Supported 00:15:35.032 Identify (06h): Supported 00:15:35.032 Abort (08h): Supported 00:15:35.032 Set Features (09h): Supported 00:15:35.032 Get Features (0Ah): Supported 00:15:35.032 Asynchronous Event Request (0Ch): Supported 00:15:35.032 Keep Alive (18h): Supported 00:15:35.032 I/O Commands 00:15:35.032 ------------ 00:15:35.032 Flush (00h): Supported LBA-Change 00:15:35.032 Write (01h): Supported LBA-Change 00:15:35.032 Read (02h): Supported 00:15:35.032 Compare (05h): Supported 00:15:35.032 Write Zeroes (08h): Supported LBA-Change 00:15:35.032 Dataset Management (09h): Supported LBA-Change 00:15:35.032 Copy (19h): Supported LBA-Change 00:15:35.032 Unknown (79h): Supported LBA-Change 00:15:35.032 Unknown (7Ah): Supported 00:15:35.032 00:15:35.032 Error Log 00:15:35.032 ========= 00:15:35.032 00:15:35.032 Arbitration 00:15:35.032 =========== 00:15:35.032 Arbitration Burst: 1 00:15:35.032 00:15:35.032 Power Management 00:15:35.032 ================ 00:15:35.032 Number of Power States: 1 00:15:35.032 Current Power State: Power State #0 00:15:35.032 Power State #0: 00:15:35.032 Max Power: 0.00 W 00:15:35.032 Non-Operational State: Operational 00:15:35.032 Entry Latency: Not Reported 00:15:35.032 Exit Latency: Not Reported 00:15:35.032 Relative Read Throughput: 0 00:15:35.032 Relative Read Latency: 0 00:15:35.032 Relative Write Throughput: 0 00:15:35.032 Relative Write Latency: 0 00:15:35.032 Idle Power: Not Reported 00:15:35.032 Active Power: Not Reported 00:15:35.032 Non-Operational Permissive Mode: Not Supported 00:15:35.032 00:15:35.032 Health Information 00:15:35.032 ================== 00:15:35.032 Critical Warnings: 00:15:35.032 Available Spare Space: OK 00:15:35.032 Temperature: OK 00:15:35.032 Device Reliability: OK 00:15:35.032 Read Only: No 00:15:35.032 Volatile Memory Backup: OK 00:15:35.032 Current Temperature: 0 Kelvin (-2[2024-05-15 10:09:20.564341] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:35.032 [2024-05-15 10:09:20.564350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:35.032 [2024-05-15 10:09:20.564377] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:35.032 [2024-05-15 10:09:20.564386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.032 [2024-05-15 10:09:20.564392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.032 [2024-05-15 10:09:20.564398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.032 [2024-05-15 10:09:20.564404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.032 [2024-05-15 10:09:20.564477] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:35.032 [2024-05-15 10:09:20.564486] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:35.032 [2024-05-15 10:09:20.565475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.032 [2024-05-15 10:09:20.565516] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:35.032 [2024-05-15 10:09:20.565522] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:35.032 [2024-05-15 10:09:20.566487] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:35.032 [2024-05-15 10:09:20.566499] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:35.032 [2024-05-15 10:09:20.566566] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:35.032 [2024-05-15 10:09:20.571301] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:35.032 73 Celsius) 00:15:35.032 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:35.032 Available Spare: 0% 00:15:35.032 Available Spare Threshold: 0% 00:15:35.032 Life Percentage Used: 0% 00:15:35.032 Data Units Read: 0 00:15:35.032 Data Units Written: 0 00:15:35.032 Host Read Commands: 0 00:15:35.032 Host Write Commands: 0 00:15:35.032 Controller Busy Time: 0 minutes 00:15:35.032 Power Cycles: 0 00:15:35.032 Power On Hours: 0 hours 00:15:35.032 Unsafe Shutdowns: 0 00:15:35.032 Unrecoverable Media Errors: 0 00:15:35.032 Lifetime Error Log Entries: 0 00:15:35.032 Warning Temperature Time: 0 minutes 00:15:35.032 Critical Temperature Time: 0 minutes 00:15:35.032 00:15:35.032 Number of Queues 00:15:35.032 ================ 00:15:35.032 Number of I/O Submission Queues: 127 00:15:35.032 Number of I/O Completion Queues: 127 00:15:35.032 00:15:35.032 Active Namespaces 00:15:35.032 ================= 00:15:35.032 Namespace 
ID:1 00:15:35.032 Error Recovery Timeout: Unlimited 00:15:35.032 Command Set Identifier: NVM (00h) 00:15:35.032 Deallocate: Supported 00:15:35.033 Deallocated/Unwritten Error: Not Supported 00:15:35.033 Deallocated Read Value: Unknown 00:15:35.033 Deallocate in Write Zeroes: Not Supported 00:15:35.033 Deallocated Guard Field: 0xFFFF 00:15:35.033 Flush: Supported 00:15:35.033 Reservation: Supported 00:15:35.033 Namespace Sharing Capabilities: Multiple Controllers 00:15:35.033 Size (in LBAs): 131072 (0GiB) 00:15:35.033 Capacity (in LBAs): 131072 (0GiB) 00:15:35.033 Utilization (in LBAs): 131072 (0GiB) 00:15:35.033 NGUID: D185709D41D44AA485E9D88F215D6F8F 00:15:35.033 UUID: d185709d-41d4-4aa4-85e9-d88f215d6f8f 00:15:35.033 Thin Provisioning: Not Supported 00:15:35.033 Per-NS Atomic Units: Yes 00:15:35.033 Atomic Boundary Size (Normal): 0 00:15:35.033 Atomic Boundary Size (PFail): 0 00:15:35.033 Atomic Boundary Offset: 0 00:15:35.033 Maximum Single Source Range Length: 65535 00:15:35.033 Maximum Copy Length: 65535 00:15:35.033 Maximum Source Range Count: 1 00:15:35.033 NGUID/EUI64 Never Reused: No 00:15:35.033 Namespace Write Protected: No 00:15:35.033 Number of LBA Formats: 1 00:15:35.033 Current LBA Format: LBA Format #00 00:15:35.033 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:35.033 00:15:35.033 10:09:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:35.033 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.033 [2024-05-15 10:09:20.754921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:40.331 Initializing NVMe Controllers 00:15:40.331 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:40.331 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:40.331 Initialization complete. Launching workers. 00:15:40.331 ======================================================== 00:15:40.331 Latency(us) 00:15:40.331 Device Information : IOPS MiB/s Average min max 00:15:40.331 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39948.96 156.05 3203.96 837.20 6836.05 00:15:40.331 ======================================================== 00:15:40.331 Total : 39948.96 156.05 3203.96 837.20 6836.05 00:15:40.331 00:15:40.331 [2024-05-15 10:09:25.776861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:40.331 10:09:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:40.331 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.331 [2024-05-15 10:09:25.948710] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:45.629 Initializing NVMe Controllers 00:15:45.629 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:45.629 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:45.629 Initialization complete. Launching workers. 
00:15:45.629 ======================================================== 00:15:45.629 Latency(us) 00:15:45.629 Device Information : IOPS MiB/s Average min max 00:15:45.629 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7996.65 4983.23 14967.23 00:15:45.629 ======================================================== 00:15:45.629 Total : 16025.60 62.60 7996.65 4983.23 14967.23 00:15:45.629 00:15:45.629 [2024-05-15 10:09:30.987407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:45.629 10:09:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:45.629 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.629 [2024-05-15 10:09:31.172306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.924 [2024-05-15 10:09:36.267638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.925 Initializing NVMe Controllers 00:15:50.925 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.925 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.925 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:50.925 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:50.925 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:50.925 Initialization complete. Launching workers. 00:15:50.925 Starting thread on core 2 00:15:50.925 Starting thread on core 3 00:15:50.925 Starting thread on core 1 00:15:50.925 10:09:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:50.925 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.925 [2024-05-15 10:09:36.521809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.231 [2024-05-15 10:09:39.591425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.231 Initializing NVMe Controllers 00:15:54.231 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.231 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.231 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:54.231 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:54.231 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:54.231 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:54.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:54.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:54.231 Initialization complete. Launching workers. 
00:15:54.231 Starting thread on core 1 with urgent priority queue 00:15:54.231 Starting thread on core 2 with urgent priority queue 00:15:54.231 Starting thread on core 3 with urgent priority queue 00:15:54.231 Starting thread on core 0 with urgent priority queue 00:15:54.231 SPDK bdev Controller (SPDK1 ) core 0: 11506.67 IO/s 8.69 secs/100000 ios 00:15:54.231 SPDK bdev Controller (SPDK1 ) core 1: 11752.33 IO/s 8.51 secs/100000 ios 00:15:54.231 SPDK bdev Controller (SPDK1 ) core 2: 10279.67 IO/s 9.73 secs/100000 ios 00:15:54.231 SPDK bdev Controller (SPDK1 ) core 3: 13745.33 IO/s 7.28 secs/100000 ios 00:15:54.231 ======================================================== 00:15:54.231 00:15:54.231 10:09:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:54.231 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.231 [2024-05-15 10:09:39.855920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:54.231 Initializing NVMe Controllers 00:15:54.231 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.231 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:54.231 Namespace ID: 1 size: 0GB 00:15:54.231 Initialization complete. 00:15:54.231 INFO: using host memory buffer for IO 00:15:54.231 Hello world! 00:15:54.231 [2024-05-15 10:09:39.888093] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:54.231 10:09:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:54.231 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.493 [2024-05-15 10:09:40.146758] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.439 Initializing NVMe Controllers 00:15:55.439 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.439 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.439 Initialization complete. Launching workers. 
00:15:55.439 submit (in ns) avg, min, max = 8845.9, 3929.2, 4000802.5 00:15:55.439 complete (in ns) avg, min, max = 17958.0, 2392.5, 5995410.0 00:15:55.439 00:15:55.439 Submit histogram 00:15:55.439 ================ 00:15:55.439 Range in us Cumulative Count 00:15:55.439 3.920 - 3.947: 1.0784% ( 208) 00:15:55.439 3.947 - 3.973: 6.3099% ( 1009) 00:15:55.439 3.973 - 4.000: 16.9181% ( 2046) 00:15:55.439 4.000 - 4.027: 29.2788% ( 2384) 00:15:55.439 4.027 - 4.053: 39.9855% ( 2065) 00:15:55.439 4.053 - 4.080: 51.4958% ( 2220) 00:15:55.439 4.080 - 4.107: 66.6770% ( 2928) 00:15:55.439 4.107 - 4.133: 81.5368% ( 2866) 00:15:55.439 4.133 - 4.160: 92.3213% ( 2080) 00:15:55.439 4.160 - 4.187: 96.9980% ( 902) 00:15:55.439 4.187 - 4.213: 98.5586% ( 301) 00:15:55.439 4.213 - 4.240: 99.2845% ( 140) 00:15:55.439 4.240 - 4.267: 99.4400% ( 30) 00:15:55.439 4.267 - 4.293: 99.4815% ( 8) 00:15:55.439 4.293 - 4.320: 99.5023% ( 4) 00:15:55.439 4.320 - 4.347: 99.5126% ( 2) 00:15:55.439 4.400 - 4.427: 99.5178% ( 1) 00:15:55.439 4.560 - 4.587: 99.5282% ( 2) 00:15:55.439 4.587 - 4.613: 99.5334% ( 1) 00:15:55.439 4.613 - 4.640: 99.5385% ( 1) 00:15:55.439 4.667 - 4.693: 99.5437% ( 1) 00:15:55.439 4.880 - 4.907: 99.5541% ( 2) 00:15:55.439 5.280 - 5.307: 99.5593% ( 1) 00:15:55.439 5.360 - 5.387: 99.5645% ( 1) 00:15:55.439 5.547 - 5.573: 99.5697% ( 1) 00:15:55.439 5.573 - 5.600: 99.5748% ( 1) 00:15:55.439 5.707 - 5.733: 99.5800% ( 1) 00:15:55.439 5.920 - 5.947: 99.5852% ( 1) 00:15:55.439 6.027 - 6.053: 99.5956% ( 2) 00:15:55.439 6.107 - 6.133: 99.6060% ( 2) 00:15:55.439 6.187 - 6.213: 99.6111% ( 1) 00:15:55.439 6.213 - 6.240: 99.6163% ( 1) 00:15:55.439 6.267 - 6.293: 99.6215% ( 1) 00:15:55.439 6.293 - 6.320: 99.6267% ( 1) 00:15:55.439 6.480 - 6.507: 99.6319% ( 1) 00:15:55.439 6.720 - 6.747: 99.6371% ( 1) 00:15:55.439 6.773 - 6.800: 99.6422% ( 1) 00:15:55.439 6.827 - 6.880: 99.6526% ( 2) 00:15:55.439 6.987 - 7.040: 99.6630% ( 2) 00:15:55.439 7.040 - 7.093: 99.6682% ( 1) 00:15:55.439 7.093 - 7.147: 99.6734% ( 1) 00:15:55.439 7.253 - 7.307: 99.6785% ( 1) 00:15:55.439 7.307 - 7.360: 99.6889% ( 2) 00:15:55.439 7.360 - 7.413: 99.7045% ( 3) 00:15:55.439 7.413 - 7.467: 99.7148% ( 2) 00:15:55.439 7.520 - 7.573: 99.7200% ( 1) 00:15:55.439 7.573 - 7.627: 99.7304% ( 2) 00:15:55.439 7.627 - 7.680: 99.7408% ( 2) 00:15:55.439 7.680 - 7.733: 99.7719% ( 6) 00:15:55.439 7.733 - 7.787: 99.7822% ( 2) 00:15:55.439 7.787 - 7.840: 99.7926% ( 2) 00:15:55.439 7.840 - 7.893: 99.8030% ( 2) 00:15:55.439 7.893 - 7.947: 99.8133% ( 2) 00:15:55.439 7.947 - 8.000: 99.8185% ( 1) 00:15:55.439 8.000 - 8.053: 99.8289% ( 2) 00:15:55.439 8.107 - 8.160: 99.8341% ( 1) 00:15:55.439 8.320 - 8.373: 99.8393% ( 1) 00:15:55.439 8.427 - 8.480: 99.8445% ( 1) 00:15:55.439 8.533 - 8.587: 99.8496% ( 1) 00:15:55.439 8.747 - 8.800: 99.8548% ( 1) 00:15:55.439 8.800 - 8.853: 99.8652% ( 2) 00:15:55.439 9.227 - 9.280: 99.8704% ( 1) 00:15:55.439 9.440 - 9.493: 99.8756% ( 1) 00:15:55.439 10.400 - 10.453: 99.8807% ( 1) 00:15:55.439 3986.773 - 4014.080: 100.0000% ( 23) 00:15:55.439 00:15:55.439 Complete histogram 00:15:55.439 ================== 00:15:55.439 Range in us Cumulative Count 00:15:55.439 2.387 - 2.400: 0.0104% ( 2) 00:15:55.439 2.400 - 2.413: 0.0726% ( 12) 00:15:55.439 2.413 - 2.427: 0.9799% ( 175) 00:15:55.439 2.427 - 2.440: 1.0629% ( 16) 00:15:55.439 2.440 - 2.453: 1.2703% ( 40) 00:15:55.439 2.453 - 2.467: 1.3221% ( 10) 00:15:55.439 2.467 - 2.480: 2.9865% ( 321) 00:15:55.439 2.480 - 2.493: 46.4665% ( 8386) 00:15:55.439 2.493 - 2.507: 58.1635% ( 2256) 00:15:55.439 2.507 - 
2.520: 72.4685% ( 2759) 00:15:55.439 2.520 - 2.533: 79.5562% ( 1367) 00:15:55.439 2.533 - 2.547: 81.7131% ( 416) 00:15:55.439 2.547 - [2024-05-15 10:09:41.166465] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.440 2.560: 85.7054% ( 770) 00:15:55.440 2.560 - 2.573: 91.4658% ( 1111) 00:15:55.440 2.573 - 2.587: 95.1003% ( 701) 00:15:55.440 2.587 - 2.600: 97.4646% ( 456) 00:15:55.440 2.600 - 2.613: 98.7556% ( 249) 00:15:55.440 2.613 - 2.627: 99.1652% ( 79) 00:15:55.440 2.627 - 2.640: 99.2793% ( 22) 00:15:55.440 2.640 - 2.653: 99.3052% ( 5) 00:15:55.440 2.653 - 2.667: 99.3260% ( 4) 00:15:55.440 4.507 - 4.533: 99.3312% ( 1) 00:15:55.440 4.720 - 4.747: 99.3363% ( 1) 00:15:55.440 4.747 - 4.773: 99.3415% ( 1) 00:15:55.440 4.800 - 4.827: 99.3467% ( 1) 00:15:55.440 4.827 - 4.853: 99.3519% ( 1) 00:15:55.440 4.907 - 4.933: 99.3571% ( 1) 00:15:55.440 4.960 - 4.987: 99.3623% ( 1) 00:15:55.440 4.987 - 5.013: 99.3726% ( 2) 00:15:55.440 5.253 - 5.280: 99.3778% ( 1) 00:15:55.440 5.280 - 5.307: 99.3882% ( 2) 00:15:55.440 5.360 - 5.387: 99.3934% ( 1) 00:15:55.440 5.573 - 5.600: 99.3986% ( 1) 00:15:55.440 5.627 - 5.653: 99.4089% ( 2) 00:15:55.440 5.680 - 5.707: 99.4193% ( 2) 00:15:55.440 5.707 - 5.733: 99.4297% ( 2) 00:15:55.440 5.733 - 5.760: 99.4400% ( 2) 00:15:55.440 5.760 - 5.787: 99.4504% ( 2) 00:15:55.440 5.840 - 5.867: 99.4660% ( 3) 00:15:55.440 5.867 - 5.893: 99.4711% ( 1) 00:15:55.440 6.000 - 6.027: 99.4763% ( 1) 00:15:55.440 6.027 - 6.053: 99.4815% ( 1) 00:15:55.440 6.107 - 6.133: 99.4971% ( 3) 00:15:55.440 6.160 - 6.187: 99.5023% ( 1) 00:15:55.440 6.187 - 6.213: 99.5074% ( 1) 00:15:55.440 6.240 - 6.267: 99.5178% ( 2) 00:15:55.440 6.267 - 6.293: 99.5230% ( 1) 00:15:55.440 6.347 - 6.373: 99.5334% ( 2) 00:15:55.440 6.373 - 6.400: 99.5385% ( 1) 00:15:55.440 6.427 - 6.453: 99.5437% ( 1) 00:15:55.440 6.507 - 6.533: 99.5489% ( 1) 00:15:55.440 6.640 - 6.667: 99.5593% ( 2) 00:15:55.440 6.693 - 6.720: 99.5645% ( 1) 00:15:55.440 6.747 - 6.773: 99.5697% ( 1) 00:15:55.440 6.800 - 6.827: 99.5748% ( 1) 00:15:55.440 7.093 - 7.147: 99.5800% ( 1) 00:15:55.440 7.200 - 7.253: 99.5852% ( 1) 00:15:55.440 7.307 - 7.360: 99.5904% ( 1) 00:15:55.440 10.827 - 10.880: 99.5956% ( 1) 00:15:55.440 11.147 - 11.200: 99.6008% ( 1) 00:15:55.440 12.267 - 12.320: 99.6060% ( 1) 00:15:55.440 12.533 - 12.587: 99.6111% ( 1) 00:15:55.440 2020.693 - 2034.347: 99.6163% ( 1) 00:15:55.440 2034.347 - 2048.000: 99.6215% ( 1) 00:15:55.440 2048.000 - 2061.653: 99.6267% ( 1) 00:15:55.440 3986.773 - 4014.080: 99.9896% ( 70) 00:15:55.440 5980.160 - 6007.467: 100.0000% ( 2) 00:15:55.440 00:15:55.440 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:55.440 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:55.440 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:55.440 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:55.440 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:55.737 [ 00:15:55.737 { 00:15:55.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:55.737 "subtype": "Discovery", 00:15:55.737 "listen_addresses": [], 00:15:55.737 "allow_any_host": true, 00:15:55.737 "hosts": [] 
00:15:55.737 }, 00:15:55.737 { 00:15:55.738 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:55.738 "subtype": "NVMe", 00:15:55.738 "listen_addresses": [ 00:15:55.738 { 00:15:55.738 "trtype": "VFIOUSER", 00:15:55.738 "adrfam": "IPv4", 00:15:55.738 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:55.738 "trsvcid": "0" 00:15:55.738 } 00:15:55.738 ], 00:15:55.738 "allow_any_host": true, 00:15:55.738 "hosts": [], 00:15:55.738 "serial_number": "SPDK1", 00:15:55.738 "model_number": "SPDK bdev Controller", 00:15:55.738 "max_namespaces": 32, 00:15:55.738 "min_cntlid": 1, 00:15:55.738 "max_cntlid": 65519, 00:15:55.738 "namespaces": [ 00:15:55.738 { 00:15:55.738 "nsid": 1, 00:15:55.738 "bdev_name": "Malloc1", 00:15:55.738 "name": "Malloc1", 00:15:55.738 "nguid": "D185709D41D44AA485E9D88F215D6F8F", 00:15:55.738 "uuid": "d185709d-41d4-4aa4-85e9-d88f215d6f8f" 00:15:55.738 } 00:15:55.738 ] 00:15:55.738 }, 00:15:55.738 { 00:15:55.738 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:55.738 "subtype": "NVMe", 00:15:55.738 "listen_addresses": [ 00:15:55.738 { 00:15:55.738 "trtype": "VFIOUSER", 00:15:55.738 "adrfam": "IPv4", 00:15:55.738 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:55.738 "trsvcid": "0" 00:15:55.738 } 00:15:55.738 ], 00:15:55.738 "allow_any_host": true, 00:15:55.738 "hosts": [], 00:15:55.738 "serial_number": "SPDK2", 00:15:55.738 "model_number": "SPDK bdev Controller", 00:15:55.738 "max_namespaces": 32, 00:15:55.738 "min_cntlid": 1, 00:15:55.738 "max_cntlid": 65519, 00:15:55.738 "namespaces": [ 00:15:55.738 { 00:15:55.738 "nsid": 1, 00:15:55.738 "bdev_name": "Malloc2", 00:15:55.738 "name": "Malloc2", 00:15:55.738 "nguid": "34034144698F4374A1D0FC584A9EE089", 00:15:55.738 "uuid": "34034144-698f-4374-a1d0-fc584a9ee089" 00:15:55.738 } 00:15:55.738 ] 00:15:55.738 } 00:15:55.738 ] 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2756858 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:55.738 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:55.738 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.033 Malloc3 00:15:56.033 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:56.033 [2024-05-15 10:09:41.553727] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.033 [2024-05-15 10:09:41.705799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.033 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:56.033 Asynchronous Event Request test 00:15:56.033 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.033 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.033 Registering asynchronous event callbacks... 00:15:56.033 Starting namespace attribute notice tests for all controllers... 00:15:56.033 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:56.033 aer_cb - Changed Namespace 00:15:56.033 Cleaning up... 00:15:56.296 [ 00:15:56.296 { 00:15:56.296 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:56.296 "subtype": "Discovery", 00:15:56.296 "listen_addresses": [], 00:15:56.296 "allow_any_host": true, 00:15:56.296 "hosts": [] 00:15:56.296 }, 00:15:56.296 { 00:15:56.296 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:56.296 "subtype": "NVMe", 00:15:56.296 "listen_addresses": [ 00:15:56.296 { 00:15:56.296 "trtype": "VFIOUSER", 00:15:56.296 "adrfam": "IPv4", 00:15:56.296 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:56.296 "trsvcid": "0" 00:15:56.296 } 00:15:56.296 ], 00:15:56.296 "allow_any_host": true, 00:15:56.296 "hosts": [], 00:15:56.296 "serial_number": "SPDK1", 00:15:56.296 "model_number": "SPDK bdev Controller", 00:15:56.296 "max_namespaces": 32, 00:15:56.296 "min_cntlid": 1, 00:15:56.296 "max_cntlid": 65519, 00:15:56.296 "namespaces": [ 00:15:56.296 { 00:15:56.296 "nsid": 1, 00:15:56.296 "bdev_name": "Malloc1", 00:15:56.296 "name": "Malloc1", 00:15:56.296 "nguid": "D185709D41D44AA485E9D88F215D6F8F", 00:15:56.296 "uuid": "d185709d-41d4-4aa4-85e9-d88f215d6f8f" 00:15:56.296 }, 00:15:56.296 { 00:15:56.296 "nsid": 2, 00:15:56.296 "bdev_name": "Malloc3", 00:15:56.296 "name": "Malloc3", 00:15:56.296 "nguid": "36F12982403441F78F4A8F378442FBD8", 00:15:56.296 "uuid": "36f12982-4034-41f7-8f4a-8f378442fbd8" 00:15:56.296 } 00:15:56.296 ] 00:15:56.296 }, 00:15:56.296 { 00:15:56.296 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:56.296 "subtype": "NVMe", 00:15:56.296 "listen_addresses": [ 00:15:56.296 { 00:15:56.296 "trtype": "VFIOUSER", 00:15:56.296 "adrfam": "IPv4", 00:15:56.296 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:56.296 "trsvcid": "0" 00:15:56.296 } 00:15:56.296 ], 00:15:56.296 "allow_any_host": true, 00:15:56.296 "hosts": [], 00:15:56.296 "serial_number": "SPDK2", 00:15:56.296 "model_number": "SPDK bdev Controller", 00:15:56.296 
"max_namespaces": 32, 00:15:56.296 "min_cntlid": 1, 00:15:56.296 "max_cntlid": 65519, 00:15:56.296 "namespaces": [ 00:15:56.296 { 00:15:56.296 "nsid": 1, 00:15:56.296 "bdev_name": "Malloc2", 00:15:56.296 "name": "Malloc2", 00:15:56.296 "nguid": "34034144698F4374A1D0FC584A9EE089", 00:15:56.296 "uuid": "34034144-698f-4374-a1d0-fc584a9ee089" 00:15:56.296 } 00:15:56.296 ] 00:15:56.296 } 00:15:56.296 ] 00:15:56.296 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2756858 00:15:56.296 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:56.296 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:56.296 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:56.296 10:09:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:56.296 [2024-05-15 10:09:41.919364] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:15:56.296 [2024-05-15 10:09:41.919399] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757103 ] 00:15:56.296 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.296 [2024-05-15 10:09:41.950838] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:56.296 [2024-05-15 10:09:41.959528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:56.296 [2024-05-15 10:09:41.959548] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4c3c0ee000 00:15:56.296 [2024-05-15 10:09:41.960523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.961523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.962533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.963536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.964541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.965544] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.966549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.967554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:56.296 [2024-05-15 10:09:41.968560] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:56.296 [2024-05-15 10:09:41.968570] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4c3aeb2000 00:15:56.296 [2024-05-15 10:09:41.969895] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:56.296 [2024-05-15 10:09:41.986110] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:56.296 [2024-05-15 10:09:41.986133] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:56.296 [2024-05-15 10:09:41.991213] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:56.296 [2024-05-15 10:09:41.991254] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:56.296 [2024-05-15 10:09:41.991339] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:56.296 [2024-05-15 10:09:41.991352] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:56.296 [2024-05-15 10:09:41.991357] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:56.296 [2024-05-15 10:09:41.992216] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:56.296 [2024-05-15 10:09:41.992225] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:56.296 [2024-05-15 10:09:41.992232] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:56.296 [2024-05-15 10:09:41.993219] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:56.296 [2024-05-15 10:09:41.993227] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:56.296 [2024-05-15 10:09:41.993235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:56.296 [2024-05-15 10:09:41.994227] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:56.296 [2024-05-15 10:09:41.994236] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:56.296 [2024-05-15 10:09:41.995231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:56.296 [2024-05-15 10:09:41.995240] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:56.296 [2024-05-15 10:09:41.995245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:56.296 [2024-05-15 10:09:41.995251] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:56.296 [2024-05-15 10:09:41.995356] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:56.296 [2024-05-15 10:09:41.995361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:56.296 [2024-05-15 10:09:41.995366] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:56.296 [2024-05-15 10:09:41.996236] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:56.296 [2024-05-15 10:09:41.997241] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:56.296 [2024-05-15 10:09:41.998255] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:56.296 [2024-05-15 10:09:41.999258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:56.296 [2024-05-15 10:09:41.999303] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:56.296 [2024-05-15 10:09:42.000268] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:56.296 [2024-05-15 10:09:42.000276] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:56.296 [2024-05-15 10:09:42.000281] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:56.296 [2024-05-15 10:09:42.000306] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:56.296 [2024-05-15 10:09:42.000314] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:56.296 [2024-05-15 10:09:42.000328] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:56.296 [2024-05-15 10:09:42.000332] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:56.297 [2024-05-15 10:09:42.000344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.007943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.007956] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:56.297 [2024-05-15 10:09:42.007961] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:56.297 [2024-05-15 10:09:42.007966] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:56.297 [2024-05-15 10:09:42.007970] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:56.297 [2024-05-15 10:09:42.008021] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:56.297 [2024-05-15 10:09:42.008026] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:56.297 [2024-05-15 10:09:42.008031] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.008041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.008053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.015298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.015313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.297 [2024-05-15 10:09:42.015321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.297 [2024-05-15 10:09:42.015329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.297 [2024-05-15 10:09:42.015337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.297 [2024-05-15 10:09:42.015344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.015351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.015360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.023301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.023308] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:56.297 [2024-05-15 10:09:42.023316] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.023323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.023328] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.023336] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.031298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.031351] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.031358] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.031365] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:56.297 [2024-05-15 10:09:42.031370] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:56.297 [2024-05-15 10:09:42.031377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.039297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.039311] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:56.297 [2024-05-15 10:09:42.039322] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.039329] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.039336] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:56.297 [2024-05-15 10:09:42.039340] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:56.297 [2024-05-15 10:09:42.039346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.047298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.047309] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.047316] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.047326] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:56.297 [2024-05-15 10:09:42.047330] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:56.297 [2024-05-15 10:09:42.047336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.055297] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.055309] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.055316] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.055325] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.055330] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.055335] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.055340] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:56.297 [2024-05-15 10:09:42.055345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:56.297 [2024-05-15 10:09:42.055350] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:56.297 [2024-05-15 10:09:42.055368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.063298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.063312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.071297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.071310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.079299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.079311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.087298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:56.297 [2024-05-15 10:09:42.087310] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:56.297 [2024-05-15 10:09:42.087314] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:56.297 [2024-05-15 10:09:42.087318] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:56.297 [2024-05-15 10:09:42.087321] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:56.297 [2024-05-15 10:09:42.087327] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:56.297 [2024-05-15 10:09:42.087335] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:56.297 [2024-05-15 10:09:42.087341] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:56.297 [2024-05-15 10:09:42.087347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.087354] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:56.297 [2024-05-15 10:09:42.087358] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:56.297 [2024-05-15 10:09:42.087364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:56.297 [2024-05-15 10:09:42.087374] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:56.297 [2024-05-15 10:09:42.087378] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:56.297 [2024-05-15 10:09:42.087384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:56.560 [2024-05-15 10:09:42.095299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:56.560 [2024-05-15 10:09:42.095314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:56.560 [2024-05-15 10:09:42.095322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:56.560 [2024-05-15 10:09:42.095331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:56.560 ===================================================== 00:15:56.560 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:56.560 ===================================================== 00:15:56.560 Controller Capabilities/Features 00:15:56.560 ================================ 00:15:56.560 Vendor ID: 4e58 00:15:56.560 Subsystem Vendor ID: 4e58 00:15:56.560 Serial Number: SPDK2 00:15:56.560 Model Number: SPDK bdev Controller 00:15:56.560 Firmware Version: 24.05 00:15:56.560 Recommended Arb Burst: 6 00:15:56.560 IEEE OUI Identifier: 8d 6b 50 00:15:56.560 Multi-path I/O 00:15:56.560 May have multiple subsystem ports: Yes 00:15:56.560 May have multiple controllers: Yes 00:15:56.560 Associated with SR-IOV VF: No 00:15:56.560 Max Data Transfer Size: 131072 00:15:56.560 Max Number of Namespaces: 32 00:15:56.560 Max Number of I/O Queues: 127 00:15:56.560 NVMe Specification Version (VS): 1.3 00:15:56.560 NVMe Specification Version (Identify): 1.3 00:15:56.560 Maximum Queue Entries: 256 00:15:56.560 Contiguous Queues Required: Yes 00:15:56.560 Arbitration Mechanisms Supported 00:15:56.560 Weighted Round Robin: Not Supported 00:15:56.560 Vendor Specific: Not Supported 00:15:56.560 Reset Timeout: 15000 ms 00:15:56.560 Doorbell Stride: 4 bytes 
00:15:56.560 NVM Subsystem Reset: Not Supported 00:15:56.560 Command Sets Supported 00:15:56.560 NVM Command Set: Supported 00:15:56.560 Boot Partition: Not Supported 00:15:56.560 Memory Page Size Minimum: 4096 bytes 00:15:56.560 Memory Page Size Maximum: 4096 bytes 00:15:56.560 Persistent Memory Region: Not Supported 00:15:56.560 Optional Asynchronous Events Supported 00:15:56.560 Namespace Attribute Notices: Supported 00:15:56.560 Firmware Activation Notices: Not Supported 00:15:56.560 ANA Change Notices: Not Supported 00:15:56.560 PLE Aggregate Log Change Notices: Not Supported 00:15:56.560 LBA Status Info Alert Notices: Not Supported 00:15:56.560 EGE Aggregate Log Change Notices: Not Supported 00:15:56.560 Normal NVM Subsystem Shutdown event: Not Supported 00:15:56.560 Zone Descriptor Change Notices: Not Supported 00:15:56.560 Discovery Log Change Notices: Not Supported 00:15:56.560 Controller Attributes 00:15:56.560 128-bit Host Identifier: Supported 00:15:56.560 Non-Operational Permissive Mode: Not Supported 00:15:56.560 NVM Sets: Not Supported 00:15:56.560 Read Recovery Levels: Not Supported 00:15:56.560 Endurance Groups: Not Supported 00:15:56.560 Predictable Latency Mode: Not Supported 00:15:56.560 Traffic Based Keep ALive: Not Supported 00:15:56.560 Namespace Granularity: Not Supported 00:15:56.560 SQ Associations: Not Supported 00:15:56.560 UUID List: Not Supported 00:15:56.560 Multi-Domain Subsystem: Not Supported 00:15:56.560 Fixed Capacity Management: Not Supported 00:15:56.560 Variable Capacity Management: Not Supported 00:15:56.560 Delete Endurance Group: Not Supported 00:15:56.560 Delete NVM Set: Not Supported 00:15:56.560 Extended LBA Formats Supported: Not Supported 00:15:56.560 Flexible Data Placement Supported: Not Supported 00:15:56.560 00:15:56.560 Controller Memory Buffer Support 00:15:56.560 ================================ 00:15:56.560 Supported: No 00:15:56.560 00:15:56.560 Persistent Memory Region Support 00:15:56.560 ================================ 00:15:56.560 Supported: No 00:15:56.560 00:15:56.560 Admin Command Set Attributes 00:15:56.560 ============================ 00:15:56.560 Security Send/Receive: Not Supported 00:15:56.560 Format NVM: Not Supported 00:15:56.560 Firmware Activate/Download: Not Supported 00:15:56.560 Namespace Management: Not Supported 00:15:56.560 Device Self-Test: Not Supported 00:15:56.560 Directives: Not Supported 00:15:56.560 NVMe-MI: Not Supported 00:15:56.560 Virtualization Management: Not Supported 00:15:56.560 Doorbell Buffer Config: Not Supported 00:15:56.560 Get LBA Status Capability: Not Supported 00:15:56.560 Command & Feature Lockdown Capability: Not Supported 00:15:56.560 Abort Command Limit: 4 00:15:56.560 Async Event Request Limit: 4 00:15:56.560 Number of Firmware Slots: N/A 00:15:56.560 Firmware Slot 1 Read-Only: N/A 00:15:56.560 Firmware Activation Without Reset: N/A 00:15:56.560 Multiple Update Detection Support: N/A 00:15:56.560 Firmware Update Granularity: No Information Provided 00:15:56.560 Per-Namespace SMART Log: No 00:15:56.560 Asymmetric Namespace Access Log Page: Not Supported 00:15:56.560 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:56.560 Command Effects Log Page: Supported 00:15:56.560 Get Log Page Extended Data: Supported 00:15:56.560 Telemetry Log Pages: Not Supported 00:15:56.560 Persistent Event Log Pages: Not Supported 00:15:56.560 Supported Log Pages Log Page: May Support 00:15:56.561 Commands Supported & Effects Log Page: Not Supported 00:15:56.561 Feature Identifiers & Effects Log Page:May 
Support 00:15:56.561 NVMe-MI Commands & Effects Log Page: May Support 00:15:56.561 Data Area 4 for Telemetry Log: Not Supported 00:15:56.561 Error Log Page Entries Supported: 128 00:15:56.561 Keep Alive: Supported 00:15:56.561 Keep Alive Granularity: 10000 ms 00:15:56.561 00:15:56.561 NVM Command Set Attributes 00:15:56.561 ========================== 00:15:56.561 Submission Queue Entry Size 00:15:56.561 Max: 64 00:15:56.561 Min: 64 00:15:56.561 Completion Queue Entry Size 00:15:56.561 Max: 16 00:15:56.561 Min: 16 00:15:56.561 Number of Namespaces: 32 00:15:56.561 Compare Command: Supported 00:15:56.561 Write Uncorrectable Command: Not Supported 00:15:56.561 Dataset Management Command: Supported 00:15:56.561 Write Zeroes Command: Supported 00:15:56.561 Set Features Save Field: Not Supported 00:15:56.561 Reservations: Not Supported 00:15:56.561 Timestamp: Not Supported 00:15:56.561 Copy: Supported 00:15:56.561 Volatile Write Cache: Present 00:15:56.561 Atomic Write Unit (Normal): 1 00:15:56.561 Atomic Write Unit (PFail): 1 00:15:56.561 Atomic Compare & Write Unit: 1 00:15:56.561 Fused Compare & Write: Supported 00:15:56.561 Scatter-Gather List 00:15:56.561 SGL Command Set: Supported (Dword aligned) 00:15:56.561 SGL Keyed: Not Supported 00:15:56.561 SGL Bit Bucket Descriptor: Not Supported 00:15:56.561 SGL Metadata Pointer: Not Supported 00:15:56.561 Oversized SGL: Not Supported 00:15:56.561 SGL Metadata Address: Not Supported 00:15:56.561 SGL Offset: Not Supported 00:15:56.561 Transport SGL Data Block: Not Supported 00:15:56.561 Replay Protected Memory Block: Not Supported 00:15:56.561 00:15:56.561 Firmware Slot Information 00:15:56.561 ========================= 00:15:56.561 Active slot: 1 00:15:56.561 Slot 1 Firmware Revision: 24.05 00:15:56.561 00:15:56.561 00:15:56.561 Commands Supported and Effects 00:15:56.561 ============================== 00:15:56.561 Admin Commands 00:15:56.561 -------------- 00:15:56.561 Get Log Page (02h): Supported 00:15:56.561 Identify (06h): Supported 00:15:56.561 Abort (08h): Supported 00:15:56.561 Set Features (09h): Supported 00:15:56.561 Get Features (0Ah): Supported 00:15:56.561 Asynchronous Event Request (0Ch): Supported 00:15:56.561 Keep Alive (18h): Supported 00:15:56.561 I/O Commands 00:15:56.561 ------------ 00:15:56.561 Flush (00h): Supported LBA-Change 00:15:56.561 Write (01h): Supported LBA-Change 00:15:56.561 Read (02h): Supported 00:15:56.561 Compare (05h): Supported 00:15:56.561 Write Zeroes (08h): Supported LBA-Change 00:15:56.561 Dataset Management (09h): Supported LBA-Change 00:15:56.561 Copy (19h): Supported LBA-Change 00:15:56.561 Unknown (79h): Supported LBA-Change 00:15:56.561 Unknown (7Ah): Supported 00:15:56.561 00:15:56.561 Error Log 00:15:56.561 ========= 00:15:56.561 00:15:56.561 Arbitration 00:15:56.561 =========== 00:15:56.561 Arbitration Burst: 1 00:15:56.561 00:15:56.561 Power Management 00:15:56.561 ================ 00:15:56.561 Number of Power States: 1 00:15:56.561 Current Power State: Power State #0 00:15:56.561 Power State #0: 00:15:56.561 Max Power: 0.00 W 00:15:56.561 Non-Operational State: Operational 00:15:56.561 Entry Latency: Not Reported 00:15:56.561 Exit Latency: Not Reported 00:15:56.561 Relative Read Throughput: 0 00:15:56.561 Relative Read Latency: 0 00:15:56.561 Relative Write Throughput: 0 00:15:56.561 Relative Write Latency: 0 00:15:56.561 Idle Power: Not Reported 00:15:56.561 Active Power: Not Reported 00:15:56.561 Non-Operational Permissive Mode: Not Supported 00:15:56.561 00:15:56.561 Health Information 
00:15:56.561 ================== 00:15:56.561 Critical Warnings: 00:15:56.561 Available Spare Space: OK 00:15:56.561 Temperature: OK 00:15:56.561 Device Reliability: OK 00:15:56.561 Read Only: No 00:15:56.561 Volatile Memory Backup: OK 00:15:56.561 Current Temperature: 0 Kelvin (-2[2024-05-15 10:09:42.095432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:56.561 [2024-05-15 10:09:42.103298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:56.561 [2024-05-15 10:09:42.103327] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:56.561 [2024-05-15 10:09:42.103336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.561 [2024-05-15 10:09:42.103343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.561 [2024-05-15 10:09:42.103349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.561 [2024-05-15 10:09:42.103355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.561 [2024-05-15 10:09:42.103409] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:56.561 [2024-05-15 10:09:42.103419] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:56.561 [2024-05-15 10:09:42.104415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:56.561 [2024-05-15 10:09:42.104464] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:56.561 [2024-05-15 10:09:42.104471] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:56.561 [2024-05-15 10:09:42.105417] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:56.561 [2024-05-15 10:09:42.105428] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:56.561 [2024-05-15 10:09:42.105474] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:56.561 [2024-05-15 10:09:42.106853] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:56.561 73 Celsius) 00:15:56.561 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:56.561 Available Spare: 0% 00:15:56.561 Available Spare Threshold: 0% 00:15:56.561 Life Percentage Used: 0% 00:15:56.561 Data Units Read: 0 00:15:56.561 Data Units Written: 0 00:15:56.561 Host Read Commands: 0 00:15:56.561 Host Write Commands: 0 00:15:56.561 Controller Busy Time: 0 minutes 00:15:56.561 Power Cycles: 0 00:15:56.561 Power On Hours: 0 hours 00:15:56.561 Unsafe Shutdowns: 0 00:15:56.561 Unrecoverable Media Errors: 0 00:15:56.561 Lifetime Error Log Entries: 0 00:15:56.561 Warning Temperature Time: 0 
minutes 00:15:56.561 Critical Temperature Time: 0 minutes 00:15:56.561 00:15:56.561 Number of Queues 00:15:56.561 ================ 00:15:56.561 Number of I/O Submission Queues: 127 00:15:56.561 Number of I/O Completion Queues: 127 00:15:56.561 00:15:56.561 Active Namespaces 00:15:56.561 ================= 00:15:56.561 Namespace ID:1 00:15:56.561 Error Recovery Timeout: Unlimited 00:15:56.561 Command Set Identifier: NVM (00h) 00:15:56.561 Deallocate: Supported 00:15:56.561 Deallocated/Unwritten Error: Not Supported 00:15:56.561 Deallocated Read Value: Unknown 00:15:56.561 Deallocate in Write Zeroes: Not Supported 00:15:56.561 Deallocated Guard Field: 0xFFFF 00:15:56.561 Flush: Supported 00:15:56.561 Reservation: Supported 00:15:56.561 Namespace Sharing Capabilities: Multiple Controllers 00:15:56.561 Size (in LBAs): 131072 (0GiB) 00:15:56.561 Capacity (in LBAs): 131072 (0GiB) 00:15:56.561 Utilization (in LBAs): 131072 (0GiB) 00:15:56.561 NGUID: 34034144698F4374A1D0FC584A9EE089 00:15:56.561 UUID: 34034144-698f-4374-a1d0-fc584a9ee089 00:15:56.561 Thin Provisioning: Not Supported 00:15:56.561 Per-NS Atomic Units: Yes 00:15:56.561 Atomic Boundary Size (Normal): 0 00:15:56.561 Atomic Boundary Size (PFail): 0 00:15:56.561 Atomic Boundary Offset: 0 00:15:56.561 Maximum Single Source Range Length: 65535 00:15:56.561 Maximum Copy Length: 65535 00:15:56.561 Maximum Source Range Count: 1 00:15:56.561 NGUID/EUI64 Never Reused: No 00:15:56.561 Namespace Write Protected: No 00:15:56.561 Number of LBA Formats: 1 00:15:56.561 Current LBA Format: LBA Format #00 00:15:56.561 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:56.561 00:15:56.561 10:09:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:56.561 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.562 [2024-05-15 10:09:42.291661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.864 Initializing NVMe Controllers 00:16:01.864 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.864 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:01.864 Initialization complete. Launching workers. 
00:16:01.864 ======================================================== 00:16:01.864 Latency(us) 00:16:01.864 Device Information : IOPS MiB/s Average min max 00:16:01.864 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39963.40 156.11 3205.32 831.47 6828.53 00:16:01.864 ======================================================== 00:16:01.864 Total : 39963.40 156.11 3205.32 831.47 6828.53 00:16:01.864 00:16:01.864 [2024-05-15 10:09:47.399473] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.864 10:09:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:01.864 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.864 [2024-05-15 10:09:47.574050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.164 Initializing NVMe Controllers 00:16:07.164 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.164 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:07.164 Initialization complete. Launching workers. 00:16:07.164 ======================================================== 00:16:07.164 Latency(us) 00:16:07.164 Device Information : IOPS MiB/s Average min max 00:16:07.164 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36014.80 140.68 3554.04 1096.42 7290.35 00:16:07.164 ======================================================== 00:16:07.164 Total : 36014.80 140.68 3554.04 1096.42 7290.35 00:16:07.164 00:16:07.164 [2024-05-15 10:09:52.594893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.164 10:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:07.164 EAL: No free 2048 kB hugepages reported on node 1 00:16:07.164 [2024-05-15 10:09:52.775689] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.462 [2024-05-15 10:09:57.924375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.462 Initializing NVMe Controllers 00:16:12.462 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.462 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.462 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:12.462 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:12.462 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:12.462 Initialization complete. Launching workers. 
00:16:12.462 Starting thread on core 2 00:16:12.462 Starting thread on core 3 00:16:12.462 Starting thread on core 1 00:16:12.462 10:09:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:12.462 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.462 [2024-05-15 10:09:58.183734] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.777 [2024-05-15 10:10:01.238556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.777 Initializing NVMe Controllers 00:16:15.777 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:15.777 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:15.777 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:15.777 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:15.777 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:15.777 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:15.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:15.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:15.778 Initialization complete. Launching workers. 00:16:15.778 Starting thread on core 1 with urgent priority queue 00:16:15.778 Starting thread on core 2 with urgent priority queue 00:16:15.778 Starting thread on core 3 with urgent priority queue 00:16:15.778 Starting thread on core 0 with urgent priority queue 00:16:15.778 SPDK bdev Controller (SPDK2 ) core 0: 8824.33 IO/s 11.33 secs/100000 ios 00:16:15.778 SPDK bdev Controller (SPDK2 ) core 1: 8224.00 IO/s 12.16 secs/100000 ios 00:16:15.778 SPDK bdev Controller (SPDK2 ) core 2: 15425.67 IO/s 6.48 secs/100000 ios 00:16:15.778 SPDK bdev Controller (SPDK2 ) core 3: 12128.33 IO/s 8.25 secs/100000 ios 00:16:15.778 ======================================================== 00:16:15.778 00:16:15.778 10:10:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:15.778 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.778 [2024-05-15 10:10:01.493612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.778 Initializing NVMe Controllers 00:16:15.778 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:15.778 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:15.778 Namespace ID: 1 size: 0GB 00:16:15.778 Initialization complete. 00:16:15.778 INFO: using host memory buffer for IO 00:16:15.778 Hello world! 
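The same vfio-user endpoint is then exercised by the bundled example applications (reconnect at @86, arbitration at @87, hello_world at @88). A condensed sketch of those three invocations, with every flag taken from the log lines above and EXAMPLES/TRID used only as shorthand:

    EXAMPLES=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # mixed random read/write on lcores 1-3 (-c 0xE), 5-second reconnect test
    "$EXAMPLES/reconnect" -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    # controller arbitration example, 3-second run (its config is echoed in the log above)
    "$EXAMPLES/arbitration" -t 3 -r "$TRID" -d 256 -g
    # single hello-world I/O through a host memory buffer
    "$EXAMPLES/hello_world" -d 256 -g -r "$TRID"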
00:16:15.778 [2024-05-15 10:10:01.504692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.778 10:10:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:16.040 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.040 [2024-05-15 10:10:01.761560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.428 Initializing NVMe Controllers 00:16:17.428 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.428 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.428 Initialization complete. Launching workers. 00:16:17.428 submit (in ns) avg, min, max = 8651.6, 3925.8, 7991105.0 00:16:17.428 complete (in ns) avg, min, max = 18070.5, 2413.3, 4002833.3 00:16:17.428 00:16:17.428 Submit histogram 00:16:17.428 ================ 00:16:17.428 Range in us Cumulative Count 00:16:17.428 3.920 - 3.947: 0.6951% ( 134) 00:16:17.428 3.947 - 3.973: 6.4429% ( 1108) 00:16:17.428 3.973 - 4.000: 17.1085% ( 2056) 00:16:17.428 4.000 - 4.027: 28.9672% ( 2286) 00:16:17.429 4.027 - 4.053: 38.3047% ( 1800) 00:16:17.429 4.053 - 4.080: 48.1403% ( 1896) 00:16:17.429 4.080 - 4.107: 63.2464% ( 2912) 00:16:17.429 4.107 - 4.133: 79.3484% ( 3104) 00:16:17.429 4.133 - 4.160: 90.9322% ( 2233) 00:16:17.429 4.160 - 4.187: 96.6540% ( 1103) 00:16:17.429 4.187 - 4.213: 98.6201% ( 379) 00:16:17.429 4.213 - 4.240: 99.2115% ( 114) 00:16:17.429 4.240 - 4.267: 99.3360% ( 24) 00:16:17.429 4.267 - 4.293: 99.3723% ( 7) 00:16:17.429 4.293 - 4.320: 99.3879% ( 3) 00:16:17.429 4.320 - 4.347: 99.4190% ( 6) 00:16:17.429 4.347 - 4.373: 99.4501% ( 6) 00:16:17.429 4.373 - 4.400: 99.4657% ( 3) 00:16:17.429 4.400 - 4.427: 99.4761% ( 2) 00:16:17.429 4.453 - 4.480: 99.4812% ( 1) 00:16:17.429 4.480 - 4.507: 99.4864% ( 1) 00:16:17.429 4.720 - 4.747: 99.4916% ( 1) 00:16:17.429 4.907 - 4.933: 99.4968% ( 1) 00:16:17.429 4.960 - 4.987: 99.5020% ( 1) 00:16:17.429 5.387 - 5.413: 99.5072% ( 1) 00:16:17.429 5.813 - 5.840: 99.5124% ( 1) 00:16:17.429 5.973 - 6.000: 99.5176% ( 1) 00:16:17.429 6.053 - 6.080: 99.5279% ( 2) 00:16:17.429 6.080 - 6.107: 99.5331% ( 1) 00:16:17.429 6.133 - 6.160: 99.5435% ( 2) 00:16:17.429 6.160 - 6.187: 99.5539% ( 2) 00:16:17.429 6.667 - 6.693: 99.5591% ( 1) 00:16:17.429 6.720 - 6.747: 99.5694% ( 2) 00:16:17.429 6.773 - 6.800: 99.5746% ( 1) 00:16:17.429 7.200 - 7.253: 99.5798% ( 1) 00:16:17.429 7.360 - 7.413: 99.5850% ( 1) 00:16:17.429 7.467 - 7.520: 99.5902% ( 1) 00:16:17.429 7.520 - 7.573: 99.6006% ( 2) 00:16:17.429 7.573 - 7.627: 99.6161% ( 3) 00:16:17.429 7.680 - 7.733: 99.6213% ( 1) 00:16:17.429 7.893 - 7.947: 99.6369% ( 3) 00:16:17.429 7.947 - 8.000: 99.6472% ( 2) 00:16:17.429 8.053 - 8.107: 99.6524% ( 1) 00:16:17.429 8.213 - 8.267: 99.6628% ( 2) 00:16:17.429 8.267 - 8.320: 99.6680% ( 1) 00:16:17.429 8.427 - 8.480: 99.6784% ( 2) 00:16:17.429 8.480 - 8.533: 99.6887% ( 2) 00:16:17.429 8.587 - 8.640: 99.6991% ( 2) 00:16:17.429 8.640 - 8.693: 99.7043% ( 1) 00:16:17.429 8.747 - 8.800: 99.7147% ( 2) 00:16:17.429 8.800 - 8.853: 99.7251% ( 2) 00:16:17.429 8.853 - 8.907: 99.7458% ( 4) 00:16:17.429 8.960 - 9.013: 99.7666% ( 4) 00:16:17.429 9.013 - 9.067: 99.7717% ( 1) 00:16:17.429 9.067 - 9.120: 99.7769% ( 1) 00:16:17.429 9.120 - 9.173: 99.7925% ( 3) 00:16:17.429 9.173 - 9.227: 99.7977% ( 1) 
00:16:17.429 9.227 - 9.280: 99.8029% ( 1) 00:16:17.429 9.280 - 9.333: 99.8081% ( 1) 00:16:17.429 9.387 - 9.440: 99.8132% ( 1) 00:16:17.429 9.493 - 9.547: 99.8184% ( 1) 00:16:17.429 9.547 - 9.600: 99.8288% ( 2) 00:16:17.429 9.653 - 9.707: 99.8392% ( 2) 00:16:17.429 9.707 - 9.760: 99.8547% ( 3) 00:16:17.429 9.813 - 9.867: 99.8599% ( 1) 00:16:17.429 9.920 - 9.973: 99.8651% ( 1) 00:16:17.429 9.973 - 10.027: 99.8703% ( 1) 00:16:17.429 10.187 - 10.240: 99.8807% ( 2) 00:16:17.429 11.360 - 11.413: 99.8859% ( 1) 00:16:17.429 11.467 - 11.520: 99.8911% ( 1) 00:16:17.429 3986.773 - 4014.080: 99.9948% ( 20) 00:16:17.429 7973.547 - 8028.160: 100.0000% ( 1) 00:16:17.429 00:16:17.429 Complete histogram 00:16:17.429 ================== 00:16:17.429 Range in us Cumulative Count 00:16:17.429 2.413 - 2.427: 0.7159% ( 138) 00:16:17.429 2.427 - 2.440: 0.9286% ( 41) 00:16:17.429 2.440 - 2.453: 1.0998% ( 33) 00:16:17.429 2.453 - 2.467: 1.1983% ( 19) 00:16:17.429 2.467 - 2.480: 4.8970% ( 713) 00:16:17.429 2.480 - [2024-05-15 10:10:02.857977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.429 2.493: 52.0620% ( 9092) 00:16:17.429 2.493 - 2.507: 59.9626% ( 1523) 00:16:17.429 2.507 - 2.520: 74.4514% ( 2793) 00:16:17.429 2.520 - 2.533: 80.3445% ( 1136) 00:16:17.429 2.533 - 2.547: 81.9837% ( 316) 00:16:17.429 2.547 - 2.560: 85.5787% ( 693) 00:16:17.429 2.560 - 2.573: 90.8440% ( 1015) 00:16:17.429 2.573 - 2.587: 94.6517% ( 734) 00:16:17.429 2.587 - 2.600: 97.3077% ( 512) 00:16:17.429 2.600 - 2.613: 98.6201% ( 253) 00:16:17.429 2.613 - 2.627: 99.0870% ( 90) 00:16:17.429 2.627 - 2.640: 99.2426% ( 30) 00:16:17.429 2.640 - 2.653: 99.2841% ( 8) 00:16:17.429 2.653 - 2.667: 99.2997% ( 3) 00:16:17.429 2.680 - 2.693: 99.3049% ( 1) 00:16:17.429 2.707 - 2.720: 99.3101% ( 1) 00:16:17.429 2.720 - 2.733: 99.3152% ( 1) 00:16:17.429 2.787 - 2.800: 99.3204% ( 1) 00:16:17.429 5.040 - 5.067: 99.3256% ( 1) 00:16:17.429 5.413 - 5.440: 99.3308% ( 1) 00:16:17.429 5.547 - 5.573: 99.3360% ( 1) 00:16:17.429 5.653 - 5.680: 99.3412% ( 1) 00:16:17.429 5.733 - 5.760: 99.3464% ( 1) 00:16:17.429 5.867 - 5.893: 99.3516% ( 1) 00:16:17.429 5.920 - 5.947: 99.3567% ( 1) 00:16:17.429 5.947 - 5.973: 99.3619% ( 1) 00:16:17.429 6.000 - 6.027: 99.3671% ( 1) 00:16:17.429 6.080 - 6.107: 99.3723% ( 1) 00:16:17.429 6.133 - 6.160: 99.3775% ( 1) 00:16:17.429 6.267 - 6.293: 99.3827% ( 1) 00:16:17.429 6.507 - 6.533: 99.3879% ( 1) 00:16:17.429 6.640 - 6.667: 99.3931% ( 1) 00:16:17.429 6.667 - 6.693: 99.3982% ( 1) 00:16:17.429 6.693 - 6.720: 99.4086% ( 2) 00:16:17.429 6.720 - 6.747: 99.4190% ( 2) 00:16:17.429 6.773 - 6.800: 99.4294% ( 2) 00:16:17.429 6.800 - 6.827: 99.4346% ( 1) 00:16:17.429 6.827 - 6.880: 99.4449% ( 2) 00:16:17.429 6.880 - 6.933: 99.4553% ( 2) 00:16:17.429 6.933 - 6.987: 99.4605% ( 1) 00:16:17.429 6.987 - 7.040: 99.4657% ( 1) 00:16:17.429 7.040 - 7.093: 99.4761% ( 2) 00:16:17.429 7.093 - 7.147: 99.4812% ( 1) 00:16:17.429 7.147 - 7.200: 99.4916% ( 2) 00:16:17.429 7.200 - 7.253: 99.4968% ( 1) 00:16:17.429 7.253 - 7.307: 99.5072% ( 2) 00:16:17.429 7.360 - 7.413: 99.5124% ( 1) 00:16:17.429 7.413 - 7.467: 99.5176% ( 1) 00:16:17.429 7.627 - 7.680: 99.5227% ( 1) 00:16:17.429 7.680 - 7.733: 99.5279% ( 1) 00:16:17.429 7.733 - 7.787: 99.5331% ( 1) 00:16:17.429 7.840 - 7.893: 99.5383% ( 1) 00:16:17.429 7.947 - 8.000: 99.5539% ( 3) 00:16:17.429 8.000 - 8.053: 99.5591% ( 1) 00:16:17.429 8.160 - 8.213: 99.5642% ( 1) 00:16:17.429 8.267 - 8.320: 99.5694% ( 1) 00:16:17.429 8.587 - 8.640: 99.5746% ( 1) 
00:16:17.429 8.907 - 8.960: 99.5798% ( 1) 00:16:17.429 9.013 - 9.067: 99.5850% ( 1) 00:16:17.429 11.253 - 11.307: 99.5902% ( 1) 00:16:17.429 13.440 - 13.493: 99.5954% ( 1) 00:16:17.429 14.187 - 14.293: 99.6006% ( 1) 00:16:17.429 15.467 - 15.573: 99.6057% ( 1) 00:16:17.429 16.000 - 16.107: 99.6109% ( 1) 00:16:17.429 3986.773 - 4014.080: 100.0000% ( 75) 00:16:17.429 00:16:17.429 10:10:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:17.429 10:10:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:17.429 10:10:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:17.429 10:10:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:17.429 10:10:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:17.429 [ 00:16:17.429 { 00:16:17.429 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:17.429 "subtype": "Discovery", 00:16:17.429 "listen_addresses": [], 00:16:17.429 "allow_any_host": true, 00:16:17.429 "hosts": [] 00:16:17.429 }, 00:16:17.429 { 00:16:17.429 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:17.429 "subtype": "NVMe", 00:16:17.429 "listen_addresses": [ 00:16:17.429 { 00:16:17.429 "trtype": "VFIOUSER", 00:16:17.430 "adrfam": "IPv4", 00:16:17.430 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:17.430 "trsvcid": "0" 00:16:17.430 } 00:16:17.430 ], 00:16:17.430 "allow_any_host": true, 00:16:17.430 "hosts": [], 00:16:17.430 "serial_number": "SPDK1", 00:16:17.430 "model_number": "SPDK bdev Controller", 00:16:17.430 "max_namespaces": 32, 00:16:17.430 "min_cntlid": 1, 00:16:17.430 "max_cntlid": 65519, 00:16:17.430 "namespaces": [ 00:16:17.430 { 00:16:17.430 "nsid": 1, 00:16:17.430 "bdev_name": "Malloc1", 00:16:17.430 "name": "Malloc1", 00:16:17.430 "nguid": "D185709D41D44AA485E9D88F215D6F8F", 00:16:17.430 "uuid": "d185709d-41d4-4aa4-85e9-d88f215d6f8f" 00:16:17.430 }, 00:16:17.430 { 00:16:17.430 "nsid": 2, 00:16:17.430 "bdev_name": "Malloc3", 00:16:17.430 "name": "Malloc3", 00:16:17.430 "nguid": "36F12982403441F78F4A8F378442FBD8", 00:16:17.430 "uuid": "36f12982-4034-41f7-8f4a-8f378442fbd8" 00:16:17.430 } 00:16:17.430 ] 00:16:17.430 }, 00:16:17.430 { 00:16:17.430 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:17.430 "subtype": "NVMe", 00:16:17.430 "listen_addresses": [ 00:16:17.430 { 00:16:17.430 "trtype": "VFIOUSER", 00:16:17.430 "adrfam": "IPv4", 00:16:17.430 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:17.430 "trsvcid": "0" 00:16:17.430 } 00:16:17.430 ], 00:16:17.430 "allow_any_host": true, 00:16:17.430 "hosts": [], 00:16:17.430 "serial_number": "SPDK2", 00:16:17.430 "model_number": "SPDK bdev Controller", 00:16:17.430 "max_namespaces": 32, 00:16:17.430 "min_cntlid": 1, 00:16:17.430 "max_cntlid": 65519, 00:16:17.430 "namespaces": [ 00:16:17.430 { 00:16:17.430 "nsid": 1, 00:16:17.430 "bdev_name": "Malloc2", 00:16:17.430 "name": "Malloc2", 00:16:17.430 "nguid": "34034144698F4374A1D0FC584A9EE089", 00:16:17.430 "uuid": "34034144-698f-4374-a1d0-fc584a9ee089" 00:16:17.430 } 00:16:17.430 ] 00:16:17.430 } 00:16:17.430 ] 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2761131 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # local i=0 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # return 0 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:17.430 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:17.430 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.691 Malloc4 00:16:17.692 [2024-05-15 10:10:03.234673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.692 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:17.692 [2024-05-15 10:10:03.404796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.692 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:17.692 Asynchronous Event Request test 00:16:17.692 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.692 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:17.692 Registering asynchronous event callbacks... 00:16:17.692 Starting namespace attribute notice tests for all controllers... 00:16:17.692 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:17.692 aer_cb - Changed Namespace 00:16:17.692 Cleaning up... 
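The AER exercise above works in two stages: the aer tool is started against the controller and touches /tmp/aer_touch_file once its initial Asynchronous Event Request is armed, then the target side adds a second namespace so that a Namespace Attribute Changed notice (log page 4, aen_event_type 0x02, as reported in aer_cb) is delivered. A sketch of the target-side RPCs, using the bdev and subsystem names shown in the log:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # create a 64 MB malloc bdev (512-byte blocks) and attach it as NSID 2 of cnode2;
    # the attach triggers the namespace-change AEN observed by the aer tool
    $RPC bdev_malloc_create 64 512 --name Malloc4
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2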
00:16:17.953 [ 00:16:17.953 { 00:16:17.953 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:17.953 "subtype": "Discovery", 00:16:17.953 "listen_addresses": [], 00:16:17.953 "allow_any_host": true, 00:16:17.953 "hosts": [] 00:16:17.953 }, 00:16:17.953 { 00:16:17.953 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:17.953 "subtype": "NVMe", 00:16:17.953 "listen_addresses": [ 00:16:17.953 { 00:16:17.953 "trtype": "VFIOUSER", 00:16:17.953 "adrfam": "IPv4", 00:16:17.953 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:17.953 "trsvcid": "0" 00:16:17.953 } 00:16:17.953 ], 00:16:17.953 "allow_any_host": true, 00:16:17.953 "hosts": [], 00:16:17.953 "serial_number": "SPDK1", 00:16:17.953 "model_number": "SPDK bdev Controller", 00:16:17.953 "max_namespaces": 32, 00:16:17.953 "min_cntlid": 1, 00:16:17.953 "max_cntlid": 65519, 00:16:17.953 "namespaces": [ 00:16:17.953 { 00:16:17.953 "nsid": 1, 00:16:17.953 "bdev_name": "Malloc1", 00:16:17.953 "name": "Malloc1", 00:16:17.953 "nguid": "D185709D41D44AA485E9D88F215D6F8F", 00:16:17.953 "uuid": "d185709d-41d4-4aa4-85e9-d88f215d6f8f" 00:16:17.953 }, 00:16:17.953 { 00:16:17.953 "nsid": 2, 00:16:17.953 "bdev_name": "Malloc3", 00:16:17.953 "name": "Malloc3", 00:16:17.953 "nguid": "36F12982403441F78F4A8F378442FBD8", 00:16:17.953 "uuid": "36f12982-4034-41f7-8f4a-8f378442fbd8" 00:16:17.953 } 00:16:17.953 ] 00:16:17.953 }, 00:16:17.953 { 00:16:17.953 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:17.953 "subtype": "NVMe", 00:16:17.953 "listen_addresses": [ 00:16:17.953 { 00:16:17.953 "trtype": "VFIOUSER", 00:16:17.953 "adrfam": "IPv4", 00:16:17.953 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:17.953 "trsvcid": "0" 00:16:17.953 } 00:16:17.953 ], 00:16:17.953 "allow_any_host": true, 00:16:17.953 "hosts": [], 00:16:17.953 "serial_number": "SPDK2", 00:16:17.953 "model_number": "SPDK bdev Controller", 00:16:17.953 "max_namespaces": 32, 00:16:17.953 "min_cntlid": 1, 00:16:17.953 "max_cntlid": 65519, 00:16:17.953 "namespaces": [ 00:16:17.953 { 00:16:17.953 "nsid": 1, 00:16:17.953 "bdev_name": "Malloc2", 00:16:17.953 "name": "Malloc2", 00:16:17.953 "nguid": "34034144698F4374A1D0FC584A9EE089", 00:16:17.953 "uuid": "34034144-698f-4374-a1d0-fc584a9ee089" 00:16:17.953 }, 00:16:17.953 { 00:16:17.953 "nsid": 2, 00:16:17.953 "bdev_name": "Malloc4", 00:16:17.953 "name": "Malloc4", 00:16:17.953 "nguid": "496876245A014DD9AA412FDC4284CD74", 00:16:17.953 "uuid": "49687624-5a01-4dd9-aa41-2fdc4284cd74" 00:16:17.953 } 00:16:17.953 ] 00:16:17.953 } 00:16:17.953 ] 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2761131 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2752376 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2752376 ']' 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2752376 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2752376 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo 
']' 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2752376' 00:16:17.953 killing process with pid 2752376 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2752376 00:16:17.953 [2024-05-15 10:10:03.647313] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:17.953 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2752376 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2761210 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2761210' 00:16:18.216 Process pid: 2761210 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2761210 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # '[' -z 2761210 ']' 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:18.216 10:10:03 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:18.216 [2024-05-15 10:10:03.860003] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:18.216 [2024-05-15 10:10:03.860914] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:16:18.216 [2024-05-15 10:10:03.860954] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.216 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.216 [2024-05-15 10:10:03.921251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.216 [2024-05-15 10:10:03.953169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:18.216 [2024-05-15 10:10:03.953206] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.216 [2024-05-15 10:10:03.953214] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.216 [2024-05-15 10:10:03.953220] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.216 [2024-05-15 10:10:03.953226] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.216 [2024-05-15 10:10:03.953325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.216 [2024-05-15 10:10:03.953408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.216 [2024-05-15 10:10:03.953441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.216 [2024-05-15 10:10:03.953441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.478 [2024-05-15 10:10:04.013105] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:18.478 [2024-05-15 10:10:04.013175] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:18.478 [2024-05-15 10:10:04.014088] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:18.478 [2024-05-15 10:10:04.014483] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:18.478 [2024-05-15 10:10:04.014629] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
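For the interrupt-mode pass (@108 onward) the target is restarted with --interrupt-mode and the VFIOUSER transport is recreated with '-M -I', after which both vfio-user devices are rebuilt by the RPC sequence that follows in the log. A condensed sketch of that setup, assuming the same two-device loop as the script (paths, NQNs and serials are the ones shown; backgrounding of nvmf_tgt is assumed here):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &

    "$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
        "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done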
00:16:18.478 10:10:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:18.478 10:10:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@861 -- # return 0 00:16:18.478 10:10:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:19.423 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:19.423 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:19.423 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:19.423 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:19.423 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:19.685 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:19.685 Malloc1 00:16:19.685 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:19.948 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:19.948 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:20.210 [2024-05-15 10:10:05.870034] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:20.210 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:20.210 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:20.210 10:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:20.473 Malloc2 00:16:20.473 10:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:20.473 10:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:20.735 10:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2761210 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' -z 2761210 ']' 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # kill -0 2761210 
00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # uname 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2761210 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2761210' 00:16:21.052 killing process with pid 2761210 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # kill 2761210 00:16:21.052 [2024-05-15 10:10:06.640495] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@971 -- # wait 2761210 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:21.052 00:16:21.052 real 0m49.256s 00:16:21.052 user 3m15.410s 00:16:21.052 sys 0m2.877s 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:21.052 10:10:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:21.052 ************************************ 00:16:21.052 END TEST nvmf_vfio_user 00:16:21.052 ************************************ 00:16:21.052 10:10:06 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:21.052 10:10:06 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:21.052 10:10:06 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:21.052 10:10:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.333 ************************************ 00:16:21.333 START TEST nvmf_vfio_user_nvme_compliance 00:16:21.333 ************************************ 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:21.333 * Looking for test storage... 
00:16:21.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.333 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.334 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:21.334 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:21.334 10:10:06 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2761897 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2761897' 00:16:21.334 Process pid: 2761897 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2761897 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # '[' -z 2761897 ']' 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:21.334 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:21.334 [2024-05-15 10:10:07.061846] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:16:21.334 [2024-05-15 10:10:07.061920] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.334 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.596 [2024-05-15 10:10:07.128118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:21.596 [2024-05-15 10:10:07.168438] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.596 [2024-05-15 10:10:07.168483] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.596 [2024-05-15 10:10:07.168491] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.596 [2024-05-15 10:10:07.168498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.596 [2024-05-15 10:10:07.168504] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:21.596 [2024-05-15 10:10:07.168645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.596 [2024-05-15 10:10:07.168767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.596 [2024-05-15 10:10:07.168769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.170 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:22.170 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@861 -- # return 0 00:16:22.170 10:10:07 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.116 malloc0 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.116 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.378 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.378 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:23.378 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.378 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:23.378 [2024-05-15 10:10:08.924928] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:23.378 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.378 10:10:08 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:23.378 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.378 00:16:23.378 00:16:23.378 CUnit - A unit testing framework for C - Version 2.1-3 00:16:23.378 http://cunit.sourceforge.net/ 00:16:23.378 00:16:23.378 00:16:23.378 Suite: nvme_compliance 00:16:23.378 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 10:10:09.090458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.378 [2024-05-15 10:10:09.091793] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:23.378 [2024-05-15 10:10:09.091803] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:23.378 [2024-05-15 10:10:09.091808] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:23.378 [2024-05-15 10:10:09.093482] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.378 passed 00:16:23.640 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 10:10:09.188071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.640 [2024-05-15 10:10:09.191087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.640 passed 00:16:23.640 Test: admin_identify_ns ...[2024-05-15 10:10:09.289553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.640 [2024-05-15 10:10:09.350304] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:23.640 [2024-05-15 10:10:09.358301] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:23.640 [2024-05-15 10:10:09.379424] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.640 passed 00:16:23.902 Test: admin_get_features_mandatory_features ...[2024-05-15 10:10:09.470077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.902 [2024-05-15 10:10:09.474110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.902 passed 00:16:23.902 Test: admin_get_features_optional_features ...[2024-05-15 10:10:09.566638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.902 [2024-05-15 10:10:09.569661] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.902 passed 00:16:23.902 Test: admin_set_features_number_of_queues ...[2024-05-15 10:10:09.663536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.163 [2024-05-15 10:10:09.768405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.163 passed 00:16:24.163 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 10:10:09.862403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.163 [2024-05-15 10:10:09.865419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.163 passed 
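The remaining compliance cases continue below. For reference, a condensed sketch of how this suite was set up and launched; the rpc_cmd calls and the nvme_compliance invocation are the ones shown above, with plain rpc.py substituted for the script's rpc_cmd wrapper as an assumption:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    $RPC nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $RPC bdev_malloc_create 64 512 -b malloc0
    $RPC nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $RPC nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $RPC nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER \
        -a /var/run/vfio-user -s 0

    # runs the 18 admin/IO-queue compliance tests against the vfio-user endpoint
    "$SPDK/test/nvme/compliance/nvme_compliance" -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'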
00:16:24.163 Test: admin_get_log_page_with_lpo ...[2024-05-15 10:10:09.958514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.425 [2024-05-15 10:10:10.026305] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:24.425 [2024-05-15 10:10:10.039347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.425 passed 00:16:24.425 Test: fabric_property_get ...[2024-05-15 10:10:10.133434] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.425 [2024-05-15 10:10:10.134665] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:24.425 [2024-05-15 10:10:10.136451] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.425 passed 00:16:24.686 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 10:10:10.231052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.686 [2024-05-15 10:10:10.232281] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:24.686 [2024-05-15 10:10:10.234067] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.686 passed 00:16:24.686 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 10:10:10.326193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.686 [2024-05-15 10:10:10.411301] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:24.686 [2024-05-15 10:10:10.427297] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:24.686 [2024-05-15 10:10:10.432402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.686 passed 00:16:24.949 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 10:10:10.525050] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.949 [2024-05-15 10:10:10.526278] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:24.949 [2024-05-15 10:10:10.528074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.949 passed 00:16:24.949 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 10:10:10.619557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.949 [2024-05-15 10:10:10.699309] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:24.949 [2024-05-15 10:10:10.723301] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:24.949 [2024-05-15 10:10:10.728378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.210 passed 00:16:25.210 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 10:10:10.819092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.210 [2024-05-15 10:10:10.820319] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:25.210 [2024-05-15 10:10:10.820340] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:25.210 [2024-05-15 10:10:10.824132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.210 passed 00:16:25.210 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
10:10:10.916249] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.472 [2024-05-15 10:10:11.009301] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:25.472 [2024-05-15 10:10:11.017301] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:25.472 [2024-05-15 10:10:11.025300] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:25.472 [2024-05-15 10:10:11.033299] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:25.472 [2024-05-15 10:10:11.062378] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.472 passed 00:16:25.472 Test: admin_create_io_sq_verify_pc ...[2024-05-15 10:10:11.153999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:25.472 [2024-05-15 10:10:11.170307] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:25.472 [2024-05-15 10:10:11.188158] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:25.472 passed 00:16:25.733 Test: admin_create_io_qp_max_qps ...[2024-05-15 10:10:11.281692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:26.682 [2024-05-15 10:10:12.387303] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:27.256 [2024-05-15 10:10:12.771341] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.256 passed 00:16:27.256 Test: admin_create_io_sq_shared_cq ...[2024-05-15 10:10:12.863565] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.256 [2024-05-15 10:10:12.999299] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:27.256 [2024-05-15 10:10:13.036361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.523 passed 00:16:27.523 00:16:27.523 Run Summary: Type Total Ran Passed Failed Inactive 00:16:27.523 suites 1 1 n/a 0 0 00:16:27.523 tests 18 18 18 0 0 00:16:27.523 asserts 360 360 360 0 n/a 00:16:27.523 00:16:27.523 Elapsed time = 1.651 seconds 00:16:27.523 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2761897 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' -z 2761897 ']' 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # kill -0 2761897 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # uname 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2761897 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2761897' 00:16:27.524 killing process with pid 2761897 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@966 -- # kill 2761897 00:16:27.524 [2024-05-15 10:10:13.143440] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # wait 2761897 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:27.524 00:16:27.524 real 0m6.409s 00:16:27.524 user 0m18.395s 00:16:27.524 sys 0m0.490s 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:27.524 10:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.524 ************************************ 00:16:27.524 END TEST nvmf_vfio_user_nvme_compliance 00:16:27.524 ************************************ 00:16:27.524 10:10:13 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:27.524 10:10:13 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:27.524 10:10:13 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:27.524 10:10:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.785 ************************************ 00:16:27.785 START TEST nvmf_vfio_user_fuzz 00:16:27.785 ************************************ 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:27.785 * Looking for test storage... 
00:16:27.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.785 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:27.786 10:10:13 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2763285 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2763285' 00:16:27.786 Process pid: 2763285 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2763285 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # '[' -z 2763285 ']' 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:27.786 10:10:13 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:28.731 10:10:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:28.731 10:10:14 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@861 -- # return 0 00:16:28.731 10:10:14 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.677 malloc0 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:29.677 10:10:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:01.814 Fuzzing completed. Shutting down the fuzz application 00:17:01.814 00:17:01.814 Dumping successful admin opcodes: 00:17:01.814 8, 9, 10, 24, 00:17:01.814 Dumping successful io opcodes: 00:17:01.814 0, 00:17:01.814 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1113949, total successful commands: 4384, random_seed: 2961467264 00:17:01.814 NS: 0x200003a1ef00 admin qp, Total commands completed: 140057, total successful commands: 1135, random_seed: 1299702848 00:17:01.814 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:01.814 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:01.814 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.814 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:01.814 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2763285 00:17:01.814 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' -z 2763285 ']' 00:17:01.814 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # kill -0 2763285 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # uname 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2763285 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2763285' 00:17:01.815 killing process with pid 2763285 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # kill 2763285 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # wait 2763285 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:01.815 00:17:01.815 real 0m33.650s 00:17:01.815 user 0m38.357s 00:17:01.815 sys 0m24.802s 00:17:01.815 10:10:46 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:01.815 10:10:47 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.815 ************************************ 00:17:01.815 END TEST nvmf_vfio_user_fuzz 00:17:01.815 ************************************ 00:17:01.815 10:10:47 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:01.815 10:10:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:01.815 10:10:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:01.815 10:10:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:01.815 ************************************ 00:17:01.815 START TEST nvmf_host_management 00:17:01.815 ************************************ 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:01.815 * Looking for test storage... 00:17:01.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.815 10:10:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.464 10:10:54 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:08.464 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:08.464 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:08.464 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:08.464 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.464 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.726 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.726 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.726 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:08.726 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.726 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.726 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:08.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:17:08.988 00:17:08.988 --- 10.0.0.2 ping statistics --- 00:17:08.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.988 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:17:08.988 00:17:08.988 --- 10.0.0.1 ping statistics --- 00:17:08.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.988 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2773588 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2773588 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2773588 ']' 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:08.988 10:10:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.988 [2024-05-15 10:10:54.656936] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:17:08.988 [2024-05-15 10:10:54.657004] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.988 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.988 [2024-05-15 10:10:54.745903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.251 [2024-05-15 10:10:54.795630] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.251 [2024-05-15 10:10:54.795688] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.251 [2024-05-15 10:10:54.795696] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.251 [2024-05-15 10:10:54.795703] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.251 [2024-05-15 10:10:54.795710] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.251 [2024-05-15 10:10:54.795836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.251 [2024-05-15 10:10:54.796000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.251 [2024-05-15 10:10:54.796163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.251 [2024-05-15 10:10:54.796164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.824 [2024-05-15 10:10:55.482914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.824 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:09.825 10:10:55 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.825 Malloc0 00:17:09.825 [2024-05-15 10:10:55.545966] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:09.825 [2024-05-15 10:10:55.546200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2773696 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2773696 /var/tmp/bdevperf.sock 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 2773696 ']' 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:09.825 { 00:17:09.825 "params": { 00:17:09.825 "name": "Nvme$subsystem", 00:17:09.825 "trtype": "$TEST_TRANSPORT", 00:17:09.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:09.825 "adrfam": "ipv4", 00:17:09.825 "trsvcid": "$NVMF_PORT", 00:17:09.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:09.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:09.825 "hdgst": ${hdgst:-false}, 00:17:09.825 "ddgst": ${ddgst:-false} 00:17:09.825 }, 00:17:09.825 "method": "bdev_nvme_attach_controller" 00:17:09.825 } 00:17:09.825 EOF 00:17:09.825 )") 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:09.825 10:10:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:09.825 "params": { 00:17:09.825 "name": "Nvme0", 00:17:09.825 "trtype": "tcp", 00:17:09.825 "traddr": "10.0.0.2", 00:17:09.825 "adrfam": "ipv4", 00:17:09.825 "trsvcid": "4420", 00:17:09.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:09.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:09.825 "hdgst": false, 00:17:09.825 "ddgst": false 00:17:09.825 }, 00:17:09.825 "method": "bdev_nvme_attach_controller" 00:17:09.825 }' 00:17:10.087 [2024-05-15 10:10:55.656019] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:10.087 [2024-05-15 10:10:55.656072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773696 ] 00:17:10.087 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.087 [2024-05-15 10:10:55.714958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.087 [2024-05-15 10:10:55.745990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.349 Running I/O for 10 seconds... 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.925 10:10:56 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=385 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 385 -ge 100 ']' 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.925 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.925 [2024-05-15 10:10:56.489215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186a600 is same with the state(5) to be set 00:17:10.926 [2024-05-15 10:10:56.489268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186a600 is same with the state(5) to be set 00:17:10.926 [2024-05-15 10:10:56.490335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.490985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.490993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.491004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.491012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.491022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.491031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.491041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.491049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.491060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.491068] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.491078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.491086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.491096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.491106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.926 [2024-05-15 10:10:56.491116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.926 [2024-05-15 10:10:56.491124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:10.927 [2024-05-15 10:10:56.491573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.491583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b25f0 is same with the state(5) to be set 00:17:10.927 [2024-05-15 10:10:56.491624] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16b25f0 was disconnected and freed. reset controller. 
00:17:10.927 [2024-05-15 10:10:56.492831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:10.927 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.927 task offset: 54400 on job bdev=Nvme0n1 fails 00:17:10.927 00:17:10.927 Latency(us) 00:17:10.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:10.927 Job: Nvme0n1 ended in about 0.49 seconds with error 00:17:10.927 Verification LBA range: start 0x0 length 0x400 00:17:10.927 Nvme0n1 : 0.49 790.62 49.41 131.77 0.00 67698.09 1870.51 64662.19 00:17:10.927 =================================================================================================================== 00:17:10.927 Total : 790.62 49.41 131.77 0.00 67698.09 1870.51 64662.19 00:17:10.927 [2024-05-15 10:10:56.494843] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:10.927 [2024-05-15 10:10:56.494866] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a16c0 (9): Bad file descriptor 00:17:10.927 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:10.927 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:10.927 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:10.927 [2024-05-15 10:10:56.498980] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:17:10.927 [2024-05-15 10:10:56.499139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:10.927 [2024-05-15 10:10:56.499173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.927 [2024-05-15 10:10:56.499189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:17:10.927 [2024-05-15 10:10:56.499198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:17:10.927 [2024-05-15 10:10:56.499206] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:17:10.927 [2024-05-15 10:10:56.499214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12a16c0 00:17:10.927 [2024-05-15 10:10:56.499238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a16c0 (9): Bad file descriptor 00:17:10.927 [2024-05-15 10:10:56.499251] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:10.927 [2024-05-15 10:10:56.499260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:10.927 [2024-05-15 10:10:56.499269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:10.927 [2024-05-15 10:10:56.499284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
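The "does not allow host" and failed CONNECT messages above (sct 1, sc 132 = 0x84) are the expected symptom of the initiator retrying its fabric CONNECT while the host NQN is not yet on the subsystem's allow list; the nvmf_subsystem_add_host call traced above re-grants access. A minimal standalone sketch of that access toggle, with the NQNs taken from this run and the earlier remove step assumed from the surrounding test rather than shown here:

  # Hypothetical reproduction of the access toggle exercised by this test.
  # While the host is removed, CONNECT from that host fails with the status seen above (sct 1, sc 132).
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0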
00:17:10.927 10:10:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:10.927 10:10:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2773696 00:17:11.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2773696) - No such process 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:11.871 { 00:17:11.871 "params": { 00:17:11.871 "name": "Nvme$subsystem", 00:17:11.871 "trtype": "$TEST_TRANSPORT", 00:17:11.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:11.871 "adrfam": "ipv4", 00:17:11.871 "trsvcid": "$NVMF_PORT", 00:17:11.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:11.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:11.871 "hdgst": ${hdgst:-false}, 00:17:11.871 "ddgst": ${ddgst:-false} 00:17:11.871 }, 00:17:11.871 "method": "bdev_nvme_attach_controller" 00:17:11.871 } 00:17:11.871 EOF 00:17:11.871 )") 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:11.871 10:10:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:11.871 "params": { 00:17:11.871 "name": "Nvme0", 00:17:11.871 "trtype": "tcp", 00:17:11.871 "traddr": "10.0.0.2", 00:17:11.871 "adrfam": "ipv4", 00:17:11.871 "trsvcid": "4420", 00:17:11.871 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:11.871 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:11.871 "hdgst": false, 00:17:11.871 "ddgst": false 00:17:11.871 }, 00:17:11.871 "method": "bdev_nvme_attach_controller" 00:17:11.871 }' 00:17:11.872 [2024-05-15 10:10:57.560935] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:11.872 [2024-05-15 10:10:57.560988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774155 ] 00:17:11.872 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.872 [2024-05-15 10:10:57.619671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.872 [2024-05-15 10:10:57.648922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.133 Running I/O for 1 seconds... 
00:17:13.079 00:17:13.079 Latency(us) 00:17:13.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.079 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:13.079 Verification LBA range: start 0x0 length 0x400 00:17:13.079 Nvme0n1 : 1.01 954.77 59.67 0.00 0.00 66080.60 15182.51 66409.81 00:17:13.079 =================================================================================================================== 00:17:13.079 Total : 954.77 59.67 0.00 0.00 66080.60 15182.51 66409.81 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.341 10:10:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.341 rmmod nvme_tcp 00:17:13.341 rmmod nvme_fabrics 00:17:13.341 rmmod nvme_keyring 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2773588 ']' 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2773588 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 2773588 ']' 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 2773588 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2773588 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2773588' 00:17:13.341 killing process with pid 2773588 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 2773588 00:17:13.341 [2024-05-15 10:10:59.088181] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:13.341 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 2773588 00:17:13.604 [2024-05-15 10:10:59.185729] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.604 10:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.521 10:11:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.521 10:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:15.521 00:17:15.521 real 0m14.200s 00:17:15.521 user 0m21.926s 00:17:15.521 sys 0m6.556s 00:17:15.521 10:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:15.521 10:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:15.521 ************************************ 00:17:15.521 END TEST nvmf_host_management 00:17:15.521 ************************************ 00:17:15.783 10:11:01 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:15.783 10:11:01 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:15.783 10:11:01 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:15.783 10:11:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.783 ************************************ 00:17:15.783 START TEST nvmf_lvol 00:17:15.783 ************************************ 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:15.783 * Looking for test storage... 
00:17:15.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.783 10:11:01 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:15.783 10:11:01 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.784 10:11:01 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:23.934 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:23.934 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:23.934 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:23.934 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:23.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:23.935 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:23.935 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:23.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:23.935 
10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:23.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:23.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:17:23.935 00:17:23.935 --- 10.0.0.2 ping statistics --- 00:17:23.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.935 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:23.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:23.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.448 ms 00:17:23.935 00:17:23.935 --- 10.0.0.1 ping statistics --- 00:17:23.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:23.935 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2778638 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2778638 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 2778638 ']' 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:23.935 10:11:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:23.935 [2024-05-15 10:11:08.651360] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:23.935 [2024-05-15 10:11:08.651414] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.935 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.935 [2024-05-15 10:11:08.718346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:23.935 [2024-05-15 10:11:08.752932] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.935 [2024-05-15 10:11:08.752975] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:23.935 [2024-05-15 10:11:08.752983] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.935 [2024-05-15 10:11:08.752989] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:23.935 [2024-05-15 10:11:08.752995] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.935 [2024-05-15 10:11:08.753136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.935 [2024-05-15 10:11:08.753257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.935 [2024-05-15 10:11:08.753260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:23.935 [2024-05-15 10:11:09.599921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.935 10:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.197 10:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:24.197 10:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:24.457 10:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:24.457 10:11:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:24.457 10:11:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:24.717 10:11:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=38d72565-c177-4b86-af20-53e0ed5934fb 00:17:24.717 10:11:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 38d72565-c177-4b86-af20-53e0ed5934fb lvol 20 00:17:24.978 10:11:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e8336139-5242-465e-bd05-b6e469c1eb27 00:17:24.978 10:11:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:24.978 10:11:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8336139-5242-465e-bd05-b6e469c1eb27 00:17:25.239 10:11:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
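Stripped of the harness variables and long workspace paths, the target built in the trace above reduces to the RPC sequence below; rpc.py stands for the full scripts/rpc.py path used in the log, and the UUIDs printed by the lvstore/lvol calls will differ on every run:

  # Roughly: a 20 MB lvol on a RAID0 of two 64 MB malloc bdevs, exported over NVMe/TCP on 10.0.0.2:4420.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
  rpc.py bdev_malloc_create 64 512                                    # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                    # prints the lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                   # prints the lvol bdev UUID
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420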
00:17:25.239 [2024-05-15 10:11:11.012954] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:25.239 [2024-05-15 10:11:11.013185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.501 10:11:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.501 10:11:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2779023 00:17:25.501 10:11:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:25.501 10:11:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:25.501 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.445 10:11:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e8336139-5242-465e-bd05-b6e469c1eb27 MY_SNAPSHOT 00:17:26.706 10:11:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a26be593-d885-4818-b164-143c1ac3a2c2 00:17:26.706 10:11:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e8336139-5242-465e-bd05-b6e469c1eb27 30 00:17:26.967 10:11:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a26be593-d885-4818-b164-143c1ac3a2c2 MY_CLONE 00:17:27.231 10:11:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f0738a6b-f905-4c41-b389-27e5f57c8654 00:17:27.231 10:11:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f0738a6b-f905-4c41-b389-27e5f57c8654 00:17:27.536 10:11:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2779023 00:17:37.549 Initializing NVMe Controllers 00:17:37.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:37.549 Controller IO queue size 128, less than required. 00:17:37.549 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:37.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:37.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:37.549 Initialization complete. Launching workers. 
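While that perf job runs against the exported lvol, the test drives the snapshot/clone path seen in the trace above; condensed, and using the UUIDs this particular run happened to produce, it amounts to:

  # Snapshot the live lvol, grow the original to 30 MB, clone the snapshot, then inflate the clone.
  rpc.py bdev_lvol_snapshot e8336139-5242-465e-bd05-b6e469c1eb27 MY_SNAPSHOT
  rpc.py bdev_lvol_resize   e8336139-5242-465e-bd05-b6e469c1eb27 30
  rpc.py bdev_lvol_clone    a26be593-d885-4818-b164-143c1ac3a2c2 MY_CLONE
  rpc.py bdev_lvol_inflate  f0738a6b-f905-4c41-b389-27e5f57c8654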
00:17:37.549 ======================================================== 00:17:37.549 Latency(us) 00:17:37.549 Device Information : IOPS MiB/s Average min max 00:17:37.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12469.30 48.71 10271.02 1630.13 60815.82 00:17:37.549 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17845.50 69.71 7173.97 2263.42 46823.54 00:17:37.549 ======================================================== 00:17:37.549 Total : 30314.80 118.42 8447.87 1630.13 60815.82 00:17:37.549 00:17:37.549 10:11:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:37.549 10:11:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8336139-5242-465e-bd05-b6e469c1eb27 00:17:37.549 10:11:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 38d72565-c177-4b86-af20-53e0ed5934fb 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.549 rmmod nvme_tcp 00:17:37.549 rmmod nvme_fabrics 00:17:37.549 rmmod nvme_keyring 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2778638 ']' 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2778638 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 2778638 ']' 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 2778638 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2778638 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2778638' 00:17:37.549 killing process with pid 2778638 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 2778638 00:17:37.549 [2024-05-15 10:11:22.203216] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 2778638 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:37.549 10:11:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.938 10:11:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:38.938 00:17:38.938 real 0m23.050s 00:17:38.938 user 1m3.773s 00:17:38.938 sys 0m7.676s 00:17:38.938 10:11:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:38.938 10:11:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:38.938 ************************************ 00:17:38.938 END TEST nvmf_lvol 00:17:38.938 ************************************ 00:17:38.938 10:11:24 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:38.938 10:11:24 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:38.938 10:11:24 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:38.938 10:11:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.938 ************************************ 00:17:38.938 START TEST nvmf_lvs_grow 00:17:38.938 ************************************ 00:17:38.938 10:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:38.938 * Looking for test storage... 
00:17:38.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.938 10:11:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.938 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.939 10:11:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:47.177 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:47.177 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:47.177 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:47.177 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:47.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.820 ms 00:17:47.177 00:17:47.177 --- 10.0.0.2 ping statistics --- 00:17:47.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.177 rtt min/avg/max/mdev = 0.820/0.820/0.820/0.000 ms 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:47.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.466 ms 00:17:47.177 00:17:47.177 --- 10.0.0.1 ping statistics --- 00:17:47.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.177 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.177 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2785358 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2785358 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 2785358 ']' 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:47.178 10:11:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:47.178 [2024-05-15 10:11:32.002051] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:17:47.178 [2024-05-15 10:11:32.002117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.178 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.178 [2024-05-15 10:11:32.073340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.178 [2024-05-15 10:11:32.111471] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.178 [2024-05-15 10:11:32.111516] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
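
Stripped of the shell tracing, the network bring-up that nvmf_tcp_init and nvmfappstart performed above reduces to the sketch below. The interface names (cvl_0_0/cvl_0_1) and the 10.0.0.x addresses are what this host reports, not generic values:

  ip netns add cvl_0_0_ns_spdk                                         # target-side port gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                   # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
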
00:17:47.178 [2024-05-15 10:11:32.111524] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.178 [2024-05-15 10:11:32.111531] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.178 [2024-05-15 10:11:32.111536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.178 [2024-05-15 10:11:32.111561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:47.178 [2024-05-15 10:11:32.945954] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:47.178 10:11:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:47.440 ************************************ 00:17:47.440 START TEST lvs_grow_clean 00:17:47.440 ************************************ 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:47.440 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:47.701 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=259d6071-8456-4913-ab82-3b072385002c 00:17:47.701 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:17:47.701 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:47.963 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:47.963 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:47.963 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 259d6071-8456-4913-ab82-3b072385002c lvol 150 00:17:47.963 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a51fad1-89f5-46ce-acfd-ccf2df02710a 00:17:47.963 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:47.963 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:48.224 [2024-05-15 10:11:33.806739] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:48.224 [2024-05-15 10:11:33.806789] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:48.224 true 00:17:48.224 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:17:48.224 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:48.224 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:48.224 10:11:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:48.486 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a51fad1-89f5-46ce-acfd-ccf2df02710a 00:17:48.486 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:48.747 [2024-05-15 10:11:34.416399] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:48.747 [2024-05-15 
10:11:34.416618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.747 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2786035 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2786035 /var/tmp/bdevperf.sock 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 2786035 ']' 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:49.009 10:11:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:49.010 [2024-05-15 10:11:34.632523] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
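
Before this bdevperf instance comes up, the lvs_grow_clean setup above has already built the target side. Pulled out of the trace, with $spdk as shorthand for the checkout path and this run's lvstore/lvol UUIDs, the sequence is roughly:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                         # transport options as set by nvmftestinit for tcp
  truncate -s 200M $spdk/test/nvmf/target/aio_bdev                     # 200M backing file
  $rpc bdev_aio_create $spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  $rpc bdev_lvol_create -u 259d6071-8456-4913-ab82-3b072385002c lvol 150   # 150 MiB lvol; the lvstore reports 49 data clusters
  truncate -s 400M $spdk/test/nvmf/target/aio_bdev                     # grow the file now; the lvstore itself is grown later under I/O
  $rpc bdev_aio_rescan aio_bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a51fad1-89f5-46ce-acfd-ccf2df02710a
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
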
00:17:49.010 [2024-05-15 10:11:34.632574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786035 ] 00:17:49.010 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.010 [2024-05-15 10:11:34.706953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.010 [2024-05-15 10:11:34.737887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.955 10:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:49.955 10:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:17:49.955 10:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:49.955 Nvme0n1 00:17:49.955 10:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:50.217 [ 00:17:50.217 { 00:17:50.217 "name": "Nvme0n1", 00:17:50.217 "aliases": [ 00:17:50.217 "1a51fad1-89f5-46ce-acfd-ccf2df02710a" 00:17:50.217 ], 00:17:50.217 "product_name": "NVMe disk", 00:17:50.217 "block_size": 4096, 00:17:50.217 "num_blocks": 38912, 00:17:50.217 "uuid": "1a51fad1-89f5-46ce-acfd-ccf2df02710a", 00:17:50.217 "assigned_rate_limits": { 00:17:50.217 "rw_ios_per_sec": 0, 00:17:50.217 "rw_mbytes_per_sec": 0, 00:17:50.217 "r_mbytes_per_sec": 0, 00:17:50.217 "w_mbytes_per_sec": 0 00:17:50.217 }, 00:17:50.217 "claimed": false, 00:17:50.217 "zoned": false, 00:17:50.217 "supported_io_types": { 00:17:50.217 "read": true, 00:17:50.217 "write": true, 00:17:50.217 "unmap": true, 00:17:50.217 "write_zeroes": true, 00:17:50.217 "flush": true, 00:17:50.217 "reset": true, 00:17:50.217 "compare": true, 00:17:50.217 "compare_and_write": true, 00:17:50.217 "abort": true, 00:17:50.217 "nvme_admin": true, 00:17:50.217 "nvme_io": true 00:17:50.217 }, 00:17:50.217 "memory_domains": [ 00:17:50.217 { 00:17:50.217 "dma_device_id": "system", 00:17:50.217 "dma_device_type": 1 00:17:50.217 } 00:17:50.217 ], 00:17:50.217 "driver_specific": { 00:17:50.217 "nvme": [ 00:17:50.217 { 00:17:50.217 "trid": { 00:17:50.217 "trtype": "TCP", 00:17:50.217 "adrfam": "IPv4", 00:17:50.217 "traddr": "10.0.0.2", 00:17:50.217 "trsvcid": "4420", 00:17:50.217 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:50.217 }, 00:17:50.217 "ctrlr_data": { 00:17:50.217 "cntlid": 1, 00:17:50.217 "vendor_id": "0x8086", 00:17:50.217 "model_number": "SPDK bdev Controller", 00:17:50.217 "serial_number": "SPDK0", 00:17:50.217 "firmware_revision": "24.05", 00:17:50.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.217 "oacs": { 00:17:50.217 "security": 0, 00:17:50.217 "format": 0, 00:17:50.217 "firmware": 0, 00:17:50.217 "ns_manage": 0 00:17:50.217 }, 00:17:50.217 "multi_ctrlr": true, 00:17:50.217 "ana_reporting": false 00:17:50.217 }, 00:17:50.217 "vs": { 00:17:50.217 "nvme_version": "1.3" 00:17:50.217 }, 00:17:50.217 "ns_data": { 00:17:50.217 "id": 1, 00:17:50.217 "can_share": true 00:17:50.217 } 00:17:50.217 } 00:17:50.217 ], 00:17:50.217 "mp_policy": "active_passive" 00:17:50.217 } 00:17:50.217 } 00:17:50.217 ] 00:17:50.217 10:11:35 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2786120 00:17:50.217 10:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:50.217 10:11:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.217 Running I/O for 10 seconds... 00:17:51.162 Latency(us) 00:17:51.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.162 Nvme0n1 : 1.00 17853.00 69.74 0.00 0.00 0.00 0.00 0.00 00:17:51.162 =================================================================================================================== 00:17:51.162 Total : 17853.00 69.74 0.00 0.00 0.00 0.00 0.00 00:17:51.162 00:17:52.108 10:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 259d6071-8456-4913-ab82-3b072385002c 00:17:52.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.108 Nvme0n1 : 2.00 18078.50 70.62 0.00 0.00 0.00 0.00 0.00 00:17:52.108 =================================================================================================================== 00:17:52.108 Total : 18078.50 70.62 0.00 0.00 0.00 0.00 0.00 00:17:52.108 00:17:52.370 true 00:17:52.370 10:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:52.370 10:11:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:17:52.370 10:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:52.370 10:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:52.370 10:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2786120 00:17:53.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.314 Nvme0n1 : 3.00 18175.67 71.00 0.00 0.00 0.00 0.00 0.00 00:17:53.314 =================================================================================================================== 00:17:53.314 Total : 18175.67 71.00 0.00 0.00 0.00 0.00 0.00 00:17:53.314 00:17:54.259 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.259 Nvme0n1 : 4.00 18247.25 71.28 0.00 0.00 0.00 0.00 0.00 00:17:54.259 =================================================================================================================== 00:17:54.259 Total : 18247.25 71.28 0.00 0.00 0.00 0.00 0.00 00:17:54.259 00:17:55.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.236 Nvme0n1 : 5.00 18284.20 71.42 0.00 0.00 0.00 0.00 0.00 00:17:55.236 =================================================================================================================== 00:17:55.236 Total : 18284.20 71.42 0.00 0.00 0.00 0.00 0.00 00:17:55.236 00:17:56.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.181 Nvme0n1 : 6.00 18317.33 71.55 0.00 0.00 0.00 0.00 0.00 00:17:56.181 
=================================================================================================================== 00:17:56.181 Total : 18317.33 71.55 0.00 0.00 0.00 0.00 0.00 00:17:56.181 00:17:57.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.125 Nvme0n1 : 7.00 18344.71 71.66 0.00 0.00 0.00 0.00 0.00 00:17:57.125 =================================================================================================================== 00:17:57.125 Total : 18344.71 71.66 0.00 0.00 0.00 0.00 0.00 00:17:57.125 00:17:58.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.512 Nvme0n1 : 8.00 18363.62 71.73 0.00 0.00 0.00 0.00 0.00 00:17:58.512 =================================================================================================================== 00:17:58.512 Total : 18363.62 71.73 0.00 0.00 0.00 0.00 0.00 00:17:58.512 00:17:59.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.457 Nvme0n1 : 9.00 18385.44 71.82 0.00 0.00 0.00 0.00 0.00 00:17:59.457 =================================================================================================================== 00:17:59.457 Total : 18385.44 71.82 0.00 0.00 0.00 0.00 0.00 00:17:59.457 00:18:00.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.403 Nvme0n1 : 10.00 18396.50 71.86 0.00 0.00 0.00 0.00 0.00 00:18:00.403 =================================================================================================================== 00:18:00.403 Total : 18396.50 71.86 0.00 0.00 0.00 0.00 0.00 00:18:00.403 00:18:00.403 00:18:00.403 Latency(us) 00:18:00.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.403 Nvme0n1 : 10.01 18399.51 71.87 0.00 0.00 6952.75 4614.83 25668.27 00:18:00.403 =================================================================================================================== 00:18:00.403 Total : 18399.51 71.87 0.00 0.00 6952.75 4614.83 25668.27 00:18:00.403 0 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2786035 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 2786035 ']' 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 2786035 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2786035 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2786035' 00:18:00.403 killing process with pid 2786035 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 2786035 00:18:00.403 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.403 00:18:00.403 Latency(us) 00:18:00.403 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:18:00.403 =================================================================================================================== 00:18:00.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.403 10:11:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 2786035 00:18:00.403 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:00.665 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:00.665 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:18:00.665 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:00.926 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:00.926 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:00.926 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:00.926 [2024-05-15 10:11:46.707421] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:18:01.188 request: 00:18:01.188 { 00:18:01.188 "uuid": "259d6071-8456-4913-ab82-3b072385002c", 00:18:01.188 "method": "bdev_lvol_get_lvstores", 00:18:01.188 "req_id": 1 00:18:01.188 } 00:18:01.188 Got JSON-RPC error response 00:18:01.188 response: 00:18:01.188 { 00:18:01.188 "code": -19, 00:18:01.188 "message": "No such device" 00:18:01.188 } 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:01.188 10:11:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:01.449 aio_bdev 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a51fad1-89f5-46ce-acfd-ccf2df02710a 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=1a51fad1-89f5-46ce-acfd-ccf2df02710a 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:01.449 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a51fad1-89f5-46ce-acfd-ccf2df02710a -t 2000 00:18:01.709 [ 00:18:01.709 { 00:18:01.709 "name": "1a51fad1-89f5-46ce-acfd-ccf2df02710a", 00:18:01.709 "aliases": [ 00:18:01.709 "lvs/lvol" 00:18:01.709 ], 00:18:01.709 "product_name": "Logical Volume", 00:18:01.709 "block_size": 4096, 00:18:01.709 "num_blocks": 38912, 00:18:01.709 "uuid": "1a51fad1-89f5-46ce-acfd-ccf2df02710a", 00:18:01.709 "assigned_rate_limits": { 00:18:01.709 "rw_ios_per_sec": 0, 00:18:01.709 "rw_mbytes_per_sec": 0, 00:18:01.709 "r_mbytes_per_sec": 0, 00:18:01.709 "w_mbytes_per_sec": 0 00:18:01.709 }, 00:18:01.709 "claimed": false, 00:18:01.709 "zoned": false, 00:18:01.709 "supported_io_types": { 00:18:01.709 "read": true, 00:18:01.709 "write": true, 00:18:01.709 "unmap": true, 00:18:01.709 "write_zeroes": true, 00:18:01.709 "flush": false, 00:18:01.709 "reset": true, 00:18:01.709 "compare": false, 00:18:01.710 "compare_and_write": false, 00:18:01.710 "abort": false, 00:18:01.710 "nvme_admin": false, 00:18:01.710 "nvme_io": false 00:18:01.710 }, 00:18:01.710 "driver_specific": { 00:18:01.710 "lvol": { 00:18:01.710 "lvol_store_uuid": "259d6071-8456-4913-ab82-3b072385002c", 00:18:01.710 "base_bdev": "aio_bdev", 
00:18:01.710 "thin_provision": false, 00:18:01.710 "num_allocated_clusters": 38, 00:18:01.710 "snapshot": false, 00:18:01.710 "clone": false, 00:18:01.710 "esnap_clone": false 00:18:01.710 } 00:18:01.710 } 00:18:01.710 } 00:18:01.710 ] 00:18:01.710 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:18:01.710 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:18:01.710 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:01.971 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:01.971 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 259d6071-8456-4913-ab82-3b072385002c 00:18:01.971 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:01.971 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:01.971 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a51fad1-89f5-46ce-acfd-ccf2df02710a 00:18:02.232 10:11:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 259d6071-8456-4913-ab82-3b072385002c 00:18:02.494 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:02.494 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.756 00:18:02.756 real 0m15.285s 00:18:02.756 user 0m15.004s 00:18:02.756 sys 0m1.273s 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:02.756 ************************************ 00:18:02.756 END TEST lvs_grow_clean 00:18:02.756 ************************************ 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:02.756 ************************************ 00:18:02.756 START TEST lvs_grow_dirty 00:18:02.756 ************************************ 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.756 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.017 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:03.017 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:03.017 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:03.017 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:03.017 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:03.278 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:03.278 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:03.278 10:11:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 lvol 150 00:18:03.278 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=29c64065-9b58-4f03-b3a3-f11cdd81ab18 00:18:03.278 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:03.278 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:03.539 [2024-05-15 10:11:49.179680] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:03.539 [2024-05-15 10:11:49.179729] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:03.539 true 00:18:03.539 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:03.539 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:18:03.800 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:03.800 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:03.800 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29c64065-9b58-4f03-b3a3-f11cdd81ab18 00:18:04.061 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:04.061 [2024-05-15 10:11:49.805574] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.061 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2789004 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2789004 /var/tmp/bdevperf.sock 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2789004 ']' 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:04.322 10:11:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:04.322 [2024-05-15 10:11:50.017610] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
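
What follows for this dirty run mirrors the clean pass above: bdevperf attaches to the exported namespace over TCP, runs a 10-second random-write job, and a couple of seconds in the script grows the lvstore and re-reads its cluster count. Condensed, with this run's lvstore UUID:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # idles until told to run
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # starts the 10s randwrite job against Nvme0n1
  sleep 2
  $spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6
  $spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after
  wait   # let the I/O job finish its 10 seconds
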
00:18:04.322 [2024-05-15 10:11:50.017679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789004 ] 00:18:04.322 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.322 [2024-05-15 10:11:50.096741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.583 [2024-05-15 10:11:50.126743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.155 10:11:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:05.155 10:11:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:18:05.155 10:11:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:05.416 Nvme0n1 00:18:05.416 10:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:05.678 [ 00:18:05.678 { 00:18:05.678 "name": "Nvme0n1", 00:18:05.678 "aliases": [ 00:18:05.678 "29c64065-9b58-4f03-b3a3-f11cdd81ab18" 00:18:05.678 ], 00:18:05.678 "product_name": "NVMe disk", 00:18:05.678 "block_size": 4096, 00:18:05.678 "num_blocks": 38912, 00:18:05.678 "uuid": "29c64065-9b58-4f03-b3a3-f11cdd81ab18", 00:18:05.678 "assigned_rate_limits": { 00:18:05.678 "rw_ios_per_sec": 0, 00:18:05.678 "rw_mbytes_per_sec": 0, 00:18:05.678 "r_mbytes_per_sec": 0, 00:18:05.678 "w_mbytes_per_sec": 0 00:18:05.678 }, 00:18:05.678 "claimed": false, 00:18:05.678 "zoned": false, 00:18:05.678 "supported_io_types": { 00:18:05.678 "read": true, 00:18:05.678 "write": true, 00:18:05.678 "unmap": true, 00:18:05.678 "write_zeroes": true, 00:18:05.678 "flush": true, 00:18:05.678 "reset": true, 00:18:05.678 "compare": true, 00:18:05.678 "compare_and_write": true, 00:18:05.678 "abort": true, 00:18:05.678 "nvme_admin": true, 00:18:05.678 "nvme_io": true 00:18:05.678 }, 00:18:05.678 "memory_domains": [ 00:18:05.678 { 00:18:05.678 "dma_device_id": "system", 00:18:05.678 "dma_device_type": 1 00:18:05.678 } 00:18:05.678 ], 00:18:05.678 "driver_specific": { 00:18:05.678 "nvme": [ 00:18:05.678 { 00:18:05.678 "trid": { 00:18:05.678 "trtype": "TCP", 00:18:05.678 "adrfam": "IPv4", 00:18:05.678 "traddr": "10.0.0.2", 00:18:05.678 "trsvcid": "4420", 00:18:05.678 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:05.678 }, 00:18:05.678 "ctrlr_data": { 00:18:05.678 "cntlid": 1, 00:18:05.678 "vendor_id": "0x8086", 00:18:05.678 "model_number": "SPDK bdev Controller", 00:18:05.678 "serial_number": "SPDK0", 00:18:05.678 "firmware_revision": "24.05", 00:18:05.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:05.678 "oacs": { 00:18:05.678 "security": 0, 00:18:05.678 "format": 0, 00:18:05.678 "firmware": 0, 00:18:05.678 "ns_manage": 0 00:18:05.678 }, 00:18:05.678 "multi_ctrlr": true, 00:18:05.678 "ana_reporting": false 00:18:05.678 }, 00:18:05.678 "vs": { 00:18:05.678 "nvme_version": "1.3" 00:18:05.678 }, 00:18:05.678 "ns_data": { 00:18:05.678 "id": 1, 00:18:05.678 "can_share": true 00:18:05.678 } 00:18:05.678 } 00:18:05.678 ], 00:18:05.678 "mp_policy": "active_passive" 00:18:05.678 } 00:18:05.678 } 00:18:05.678 ] 00:18:05.678 10:11:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2789184 00:18:05.678 10:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:05.678 10:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.678 Running I/O for 10 seconds... 00:18:07.067 Latency(us) 00:18:07.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.067 Nvme0n1 : 1.00 17859.00 69.76 0.00 0.00 0.00 0.00 0.00 00:18:07.067 =================================================================================================================== 00:18:07.067 Total : 17859.00 69.76 0.00 0.00 0.00 0.00 0.00 00:18:07.067 00:18:07.640 10:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:07.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.901 Nvme0n1 : 2.00 18119.50 70.78 0.00 0.00 0.00 0.00 0.00 00:18:07.901 =================================================================================================================== 00:18:07.901 Total : 18119.50 70.78 0.00 0.00 0.00 0.00 0.00 00:18:07.901 00:18:07.901 true 00:18:07.901 10:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:07.901 10:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:07.901 10:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:07.901 10:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:07.901 10:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2789184 00:18:08.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.845 Nvme0n1 : 3.00 18223.67 71.19 0.00 0.00 0.00 0.00 0.00 00:18:08.845 =================================================================================================================== 00:18:08.845 Total : 18223.67 71.19 0.00 0.00 0.00 0.00 0.00 00:18:08.845 00:18:09.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.786 Nvme0n1 : 4.00 18275.75 71.39 0.00 0.00 0.00 0.00 0.00 00:18:09.786 =================================================================================================================== 00:18:09.786 Total : 18275.75 71.39 0.00 0.00 0.00 0.00 0.00 00:18:09.786 00:18:10.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.849 Nvme0n1 : 5.00 18317.40 71.55 0.00 0.00 0.00 0.00 0.00 00:18:10.849 =================================================================================================================== 00:18:10.849 Total : 18317.40 71.55 0.00 0.00 0.00 0.00 0.00 00:18:10.849 00:18:11.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.792 Nvme0n1 : 6.00 18336.50 71.63 0.00 0.00 0.00 0.00 0.00 00:18:11.792 
=================================================================================================================== 00:18:11.792 Total : 18336.50 71.63 0.00 0.00 0.00 0.00 0.00 00:18:11.792 00:18:12.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.736 Nvme0n1 : 7.00 18365.86 71.74 0.00 0.00 0.00 0.00 0.00 00:18:12.736 =================================================================================================================== 00:18:12.736 Total : 18365.86 71.74 0.00 0.00 0.00 0.00 0.00 00:18:12.736 00:18:13.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.682 Nvme0n1 : 8.00 18384.38 71.81 0.00 0.00 0.00 0.00 0.00 00:18:13.682 =================================================================================================================== 00:18:13.682 Total : 18384.38 71.81 0.00 0.00 0.00 0.00 0.00 00:18:13.682 00:18:15.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.066 Nvme0n1 : 9.00 18391.00 71.84 0.00 0.00 0.00 0.00 0.00 00:18:15.066 =================================================================================================================== 00:18:15.066 Total : 18391.00 71.84 0.00 0.00 0.00 0.00 0.00 00:18:15.066 00:18:16.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.010 Nvme0n1 : 10.00 18400.30 71.88 0.00 0.00 0.00 0.00 0.00 00:18:16.010 =================================================================================================================== 00:18:16.010 Total : 18400.30 71.88 0.00 0.00 0.00 0.00 0.00 00:18:16.010 00:18:16.010 00:18:16.010 Latency(us) 00:18:16.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.010 Nvme0n1 : 10.00 18404.36 71.89 0.00 0.00 6951.20 4833.28 25012.91 00:18:16.010 =================================================================================================================== 00:18:16.010 Total : 18404.36 71.89 0.00 0.00 6951.20 4833.28 25012.91 00:18:16.010 0 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2789004 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 2789004 ']' 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 2789004 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2789004 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2789004' 00:18:16.010 killing process with pid 2789004 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 2789004 00:18:16.010 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.010 00:18:16.010 Latency(us) 00:18:16.010 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:18:16.010 =================================================================================================================== 00:18:16.010 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 2789004 00:18:16.010 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:16.271 10:12:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:16.271 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:16.271 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2785358 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2785358 00:18:16.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2785358 Killed "${NVMF_APP[@]}" "$@" 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2791613 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2791613 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 2791613 ']' 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:16.532 10:12:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:16.532 [2024-05-15 10:12:02.287148] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:16.532 [2024-05-15 10:12:02.287204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.532 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.794 [2024-05-15 10:12:02.352261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.794 [2024-05-15 10:12:02.383369] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.794 [2024-05-15 10:12:02.383406] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.794 [2024-05-15 10:12:02.383414] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.794 [2024-05-15 10:12:02.383420] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.794 [2024-05-15 10:12:02.383426] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.794 [2024-05-15 10:12:02.383449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.365 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:17.365 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:18:17.365 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.365 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:17.365 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:17.365 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.365 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:17.626 [2024-05-15 10:12:03.222059] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:17.626 [2024-05-15 10:12:03.222147] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:17.626 [2024-05-15 10:12:03.222178] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 29c64065-9b58-4f03-b3a3-f11cdd81ab18 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=29c64065-9b58-4f03-b3a3-f11cdd81ab18 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@899 -- # [[ -z '' ]] 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:17.626 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 29c64065-9b58-4f03-b3a3-f11cdd81ab18 -t 2000 00:18:17.888 [ 00:18:17.888 { 00:18:17.888 "name": "29c64065-9b58-4f03-b3a3-f11cdd81ab18", 00:18:17.888 "aliases": [ 00:18:17.888 "lvs/lvol" 00:18:17.888 ], 00:18:17.888 "product_name": "Logical Volume", 00:18:17.888 "block_size": 4096, 00:18:17.888 "num_blocks": 38912, 00:18:17.888 "uuid": "29c64065-9b58-4f03-b3a3-f11cdd81ab18", 00:18:17.888 "assigned_rate_limits": { 00:18:17.888 "rw_ios_per_sec": 0, 00:18:17.888 "rw_mbytes_per_sec": 0, 00:18:17.888 "r_mbytes_per_sec": 0, 00:18:17.888 "w_mbytes_per_sec": 0 00:18:17.888 }, 00:18:17.888 "claimed": false, 00:18:17.888 "zoned": false, 00:18:17.888 "supported_io_types": { 00:18:17.888 "read": true, 00:18:17.888 "write": true, 00:18:17.888 "unmap": true, 00:18:17.888 "write_zeroes": true, 00:18:17.888 "flush": false, 00:18:17.888 "reset": true, 00:18:17.888 "compare": false, 00:18:17.888 "compare_and_write": false, 00:18:17.888 "abort": false, 00:18:17.888 "nvme_admin": false, 00:18:17.888 "nvme_io": false 00:18:17.888 }, 00:18:17.888 "driver_specific": { 00:18:17.888 "lvol": { 00:18:17.888 "lvol_store_uuid": "8f8d6972-e684-41fd-a172-7bbc6fa36cc6", 00:18:17.888 "base_bdev": "aio_bdev", 00:18:17.888 "thin_provision": false, 00:18:17.888 "num_allocated_clusters": 38, 00:18:17.888 "snapshot": false, 00:18:17.888 "clone": false, 00:18:17.888 "esnap_clone": false 00:18:17.888 } 00:18:17.888 } 00:18:17.888 } 00:18:17.888 ] 00:18:17.888 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:18:17.888 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:17.888 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:18.150 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:18.150 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:18.150 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:18.150 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:18.150 10:12:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:18.412 [2024-05-15 10:12:03.977997] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:18.412 request: 00:18:18.412 { 00:18:18.412 "uuid": "8f8d6972-e684-41fd-a172-7bbc6fa36cc6", 00:18:18.412 "method": "bdev_lvol_get_lvstores", 00:18:18.412 "req_id": 1 00:18:18.412 } 00:18:18.412 Got JSON-RPC error response 00:18:18.412 response: 00:18:18.412 { 00:18:18.412 "code": -19, 00:18:18.412 "message": "No such device" 00:18:18.412 } 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:18:18.412 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:18.674 aio_bdev 00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 29c64065-9b58-4f03-b3a3-f11cdd81ab18 00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=29c64065-9b58-4f03-b3a3-f11cdd81ab18 00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 
00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:18.674 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 29c64065-9b58-4f03-b3a3-f11cdd81ab18 -t 2000 00:18:18.936 [ 00:18:18.936 { 00:18:18.936 "name": "29c64065-9b58-4f03-b3a3-f11cdd81ab18", 00:18:18.936 "aliases": [ 00:18:18.936 "lvs/lvol" 00:18:18.936 ], 00:18:18.936 "product_name": "Logical Volume", 00:18:18.936 "block_size": 4096, 00:18:18.936 "num_blocks": 38912, 00:18:18.936 "uuid": "29c64065-9b58-4f03-b3a3-f11cdd81ab18", 00:18:18.936 "assigned_rate_limits": { 00:18:18.936 "rw_ios_per_sec": 0, 00:18:18.936 "rw_mbytes_per_sec": 0, 00:18:18.936 "r_mbytes_per_sec": 0, 00:18:18.936 "w_mbytes_per_sec": 0 00:18:18.936 }, 00:18:18.936 "claimed": false, 00:18:18.936 "zoned": false, 00:18:18.936 "supported_io_types": { 00:18:18.936 "read": true, 00:18:18.936 "write": true, 00:18:18.936 "unmap": true, 00:18:18.936 "write_zeroes": true, 00:18:18.936 "flush": false, 00:18:18.936 "reset": true, 00:18:18.936 "compare": false, 00:18:18.936 "compare_and_write": false, 00:18:18.936 "abort": false, 00:18:18.936 "nvme_admin": false, 00:18:18.936 "nvme_io": false 00:18:18.936 }, 00:18:18.936 "driver_specific": { 00:18:18.936 "lvol": { 00:18:18.936 "lvol_store_uuid": "8f8d6972-e684-41fd-a172-7bbc6fa36cc6", 00:18:18.936 "base_bdev": "aio_bdev", 00:18:18.936 "thin_provision": false, 00:18:18.936 "num_allocated_clusters": 38, 00:18:18.936 "snapshot": false, 00:18:18.936 "clone": false, 00:18:18.936 "esnap_clone": false 00:18:18.936 } 00:18:18.936 } 00:18:18.936 } 00:18:18.936 ] 00:18:18.936 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:18:18.936 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:18.936 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:19.198 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:19.198 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:19.198 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:19.198 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:19.198 10:12:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 29c64065-9b58-4f03-b3a3-f11cdd81ab18 00:18:19.459 10:12:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f8d6972-e684-41fd-a172-7bbc6fa36cc6 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.721 00:18:19.721 real 0m17.066s 00:18:19.721 user 0m44.675s 00:18:19.721 sys 0m2.817s 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:19.721 ************************************ 00:18:19.721 END TEST lvs_grow_dirty 00:18:19.721 ************************************ 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:18:19.721 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:19.721 nvmf_trace.0 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.983 rmmod nvme_tcp 00:18:19.983 rmmod nvme_fabrics 00:18:19.983 rmmod nvme_keyring 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2791613 ']' 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2791613 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 2791613 ']' 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 2791613 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2791613 00:18:19.983 10:12:05 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2791613' 00:18:19.983 killing process with pid 2791613 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 2791613 00:18:19.983 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 2791613 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.245 10:12:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.165 10:12:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:22.165 00:18:22.165 real 0m43.347s 00:18:22.165 user 1m5.517s 00:18:22.165 sys 0m10.022s 00:18:22.165 10:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:22.165 10:12:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:22.165 ************************************ 00:18:22.165 END TEST nvmf_lvs_grow 00:18:22.165 ************************************ 00:18:22.165 10:12:07 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:22.165 10:12:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:22.165 10:12:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:22.165 10:12:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.165 ************************************ 00:18:22.165 START TEST nvmf_bdev_io_wait 00:18:22.165 ************************************ 00:18:22.165 10:12:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:22.428 * Looking for test storage... 
00:18:22.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.428 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.429 10:12:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:30.586 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:30.586 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:30.586 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:30.586 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:30.586 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:30.587 10:12:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:30.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:30.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:18:30.587 00:18:30.587 --- 10.0.0.2 ping statistics --- 00:18:30.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.587 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:18:30.587 00:18:30.587 --- 10.0.0.1 ping statistics --- 00:18:30.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.587 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2796802 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2796802 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 2796802 ']' 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:30.587 10:12:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 [2024-05-15 10:12:15.381905] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:18:30.587 [2024-05-15 10:12:15.381977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.587 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.587 [2024-05-15 10:12:15.454357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:30.587 [2024-05-15 10:12:15.496755] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.587 [2024-05-15 10:12:15.496801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.587 [2024-05-15 10:12:15.496810] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.587 [2024-05-15 10:12:15.496817] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.587 [2024-05-15 10:12:15.496822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.587 [2024-05-15 10:12:15.496968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.587 [2024-05-15 10:12:15.497092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.587 [2024-05-15 10:12:15.497246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.587 [2024-05-15 10:12:15.497247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 [2024-05-15 10:12:16.269397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.587 10:12:16 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 Malloc0 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:30.587 [2024-05-15 10:12:16.336102] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:30.587 [2024-05-15 10:12:16.336358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2797156 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2797158 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.587 { 00:18:30.587 "params": { 00:18:30.587 "name": "Nvme$subsystem", 00:18:30.587 "trtype": "$TEST_TRANSPORT", 00:18:30.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.587 "adrfam": "ipv4", 00:18:30.587 "trsvcid": "$NVMF_PORT", 00:18:30.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.587 "hdgst": ${hdgst:-false}, 00:18:30.587 "ddgst": ${ddgst:-false} 00:18:30.587 }, 00:18:30.587 "method": 
"bdev_nvme_attach_controller" 00:18:30.587 } 00:18:30.587 EOF 00:18:30.587 )") 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2797160 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2797163 00:18:30.587 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.588 { 00:18:30.588 "params": { 00:18:30.588 "name": "Nvme$subsystem", 00:18:30.588 "trtype": "$TEST_TRANSPORT", 00:18:30.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.588 "adrfam": "ipv4", 00:18:30.588 "trsvcid": "$NVMF_PORT", 00:18:30.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.588 "hdgst": ${hdgst:-false}, 00:18:30.588 "ddgst": ${ddgst:-false} 00:18:30.588 }, 00:18:30.588 "method": "bdev_nvme_attach_controller" 00:18:30.588 } 00:18:30.588 EOF 00:18:30.588 )") 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.588 { 00:18:30.588 "params": { 00:18:30.588 "name": "Nvme$subsystem", 00:18:30.588 "trtype": "$TEST_TRANSPORT", 00:18:30.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.588 "adrfam": "ipv4", 00:18:30.588 "trsvcid": "$NVMF_PORT", 00:18:30.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.588 "hdgst": ${hdgst:-false}, 00:18:30.588 "ddgst": ${ddgst:-false} 00:18:30.588 }, 00:18:30.588 "method": "bdev_nvme_attach_controller" 00:18:30.588 } 00:18:30.588 EOF 00:18:30.588 )") 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@554 -- # cat 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:30.588 { 00:18:30.588 "params": { 00:18:30.588 "name": "Nvme$subsystem", 00:18:30.588 "trtype": "$TEST_TRANSPORT", 00:18:30.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:30.588 "adrfam": "ipv4", 00:18:30.588 "trsvcid": "$NVMF_PORT", 00:18:30.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:30.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:30.588 "hdgst": ${hdgst:-false}, 00:18:30.588 "ddgst": ${ddgst:-false} 00:18:30.588 }, 00:18:30.588 "method": "bdev_nvme_attach_controller" 00:18:30.588 } 00:18:30.588 EOF 00:18:30.588 )") 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2797156 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:30.588 "params": { 00:18:30.588 "name": "Nvme1", 00:18:30.588 "trtype": "tcp", 00:18:30.588 "traddr": "10.0.0.2", 00:18:30.588 "adrfam": "ipv4", 00:18:30.588 "trsvcid": "4420", 00:18:30.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.588 "hdgst": false, 00:18:30.588 "ddgst": false 00:18:30.588 }, 00:18:30.588 "method": "bdev_nvme_attach_controller" 00:18:30.588 }' 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:30.588 "params": { 00:18:30.588 "name": "Nvme1", 00:18:30.588 "trtype": "tcp", 00:18:30.588 "traddr": "10.0.0.2", 00:18:30.588 "adrfam": "ipv4", 00:18:30.588 "trsvcid": "4420", 00:18:30.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.588 "hdgst": false, 00:18:30.588 "ddgst": false 00:18:30.588 }, 00:18:30.588 "method": "bdev_nvme_attach_controller" 00:18:30.588 }' 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:30.588 "params": { 00:18:30.588 "name": "Nvme1", 00:18:30.588 "trtype": "tcp", 00:18:30.588 "traddr": "10.0.0.2", 00:18:30.588 "adrfam": "ipv4", 00:18:30.588 "trsvcid": "4420", 00:18:30.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.588 "hdgst": false, 00:18:30.588 "ddgst": false 00:18:30.588 }, 00:18:30.588 "method": "bdev_nvme_attach_controller" 00:18:30.588 }' 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:30.588 10:12:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:30.588 "params": { 00:18:30.588 "name": "Nvme1", 00:18:30.588 "trtype": "tcp", 00:18:30.588 "traddr": "10.0.0.2", 00:18:30.588 "adrfam": "ipv4", 00:18:30.588 "trsvcid": "4420", 00:18:30.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.588 "hdgst": false, 00:18:30.588 "ddgst": false 00:18:30.588 }, 00:18:30.588 "method": "bdev_nvme_attach_controller" 00:18:30.588 }' 00:18:30.849 [2024-05-15 10:12:16.389112] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:30.850 [2024-05-15 10:12:16.389162] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:30.850 [2024-05-15 10:12:16.390918] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:30.850 [2024-05-15 10:12:16.390967] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:30.850 [2024-05-15 10:12:16.392110] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:18:30.850 [2024-05-15 10:12:16.392155] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:30.850 [2024-05-15 10:12:16.393342] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:18:30.850 [2024-05-15 10:12:16.393385] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:30.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.850 [2024-05-15 10:12:16.520462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.850 [2024-05-15 10:12:16.538609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:30.850 [2024-05-15 10:12:16.560500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.850 [2024-05-15 10:12:16.577513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:30.850 [2024-05-15 10:12:16.606950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.850 [2024-05-15 10:12:16.624907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:31.134 [2024-05-15 10:12:16.666221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.134 [2024-05-15 10:12:16.686362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:31.134 Running I/O for 1 seconds... 00:18:31.134 Running I/O for 1 seconds... 00:18:31.134 Running I/O for 1 seconds... 00:18:31.421 Running I/O for 1 seconds... 00:18:32.365 00:18:32.365 Latency(us) 00:18:32.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.365 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:32.365 Nvme1n1 : 1.00 188477.84 736.24 0.00 0.00 676.86 269.65 1378.99 00:18:32.365 =================================================================================================================== 00:18:32.365 Total : 188477.84 736.24 0.00 0.00 676.86 269.65 1378.99 00:18:32.365 00:18:32.365 Latency(us) 00:18:32.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.365 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:32.365 Nvme1n1 : 1.01 15119.92 59.06 0.00 0.00 8426.65 4396.37 21189.97 00:18:32.365 =================================================================================================================== 00:18:32.365 Total : 15119.92 59.06 0.00 0.00 8426.65 4396.37 21189.97 00:18:32.365 00:18:32.365 Latency(us) 00:18:32.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.365 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:32.365 Nvme1n1 : 1.08 2579.05 10.07 0.00 0.00 49499.83 2512.21 276125.01 00:18:32.365 =================================================================================================================== 00:18:32.365 Total : 2579.05 10.07 0.00 0.00 49499.83 2512.21 276125.01 00:18:32.365 00:18:32.365 Latency(us) 00:18:32.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.365 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:32.365 Nvme1n1 : 1.01 15287.19 59.72 0.00 0.00 8350.51 5079.04 20753.07 00:18:32.365 =================================================================================================================== 00:18:32.365 Total : 15287.19 59.72 0.00 0.00 8350.51 5079.04 20753.07 00:18:32.365 10:12:17 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 2797158 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2797160 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2797163 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.365 rmmod nvme_tcp 00:18:32.365 rmmod nvme_fabrics 00:18:32.365 rmmod nvme_keyring 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2796802 ']' 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2796802 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 2796802 ']' 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 2796802 00:18:32.365 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2796802 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2796802' 00:18:32.626 killing process with pid 2796802 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 2796802 00:18:32.626 [2024-05-15 10:12:18.210019] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 2796802 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.626 10:12:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.173 10:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:35.173 00:18:35.173 real 0m12.455s 00:18:35.173 user 0m17.541s 00:18:35.173 sys 0m6.754s 00:18:35.173 10:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:35.173 10:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:35.173 ************************************ 00:18:35.173 END TEST nvmf_bdev_io_wait 00:18:35.173 ************************************ 00:18:35.173 10:12:20 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:35.173 10:12:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:35.173 10:12:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:35.173 10:12:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.173 ************************************ 00:18:35.173 START TEST nvmf_queue_depth 00:18:35.173 ************************************ 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:35.173 * Looking for test storage... 
00:18:35.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.173 10:12:20 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.174 10:12:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:41.770 
10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:41.770 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:41.770 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:41.770 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:41.770 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:41.770 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:42.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:18:42.032 00:18:42.032 --- 10.0.0.2 ping statistics --- 00:18:42.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.032 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:18:42.032 00:18:42.032 --- 10.0.0.1 ping statistics --- 00:18:42.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.032 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:42.032 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:42.033 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.033 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:42.033 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2801667 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2801667 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2801667 ']' 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:42.295 10:12:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:42.295 [2024-05-15 10:12:27.918033] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:18:42.295 [2024-05-15 10:12:27.918084] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.295 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.295 [2024-05-15 10:12:28.000926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.295 [2024-05-15 10:12:28.030883] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.295 [2024-05-15 10:12:28.030919] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.295 [2024-05-15 10:12:28.030926] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.295 [2024-05-15 10:12:28.030933] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.295 [2024-05-15 10:12:28.030938] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.295 [2024-05-15 10:12:28.030956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.242 [2024-05-15 10:12:28.726969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.242 Malloc0 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.242 10:12:28 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.242 [2024-05-15 10:12:28.784137] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:43.242 [2024-05-15 10:12:28.784385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2801861 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2801861 /var/tmp/bdevperf.sock 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 2801861 ']' 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:43.242 10:12:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.242 [2024-05-15 10:12:28.837849] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:18:43.242 [2024-05-15 10:12:28.837904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801861 ] 00:18:43.242 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.242 [2024-05-15 10:12:28.898823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.242 [2024-05-15 10:12:28.933063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.242 10:12:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:43.242 10:12:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:18:43.242 10:12:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:43.242 10:12:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.242 10:12:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:43.504 NVMe0n1 00:18:43.504 10:12:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.504 10:12:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:43.766 Running I/O for 10 seconds... 00:18:53.777 00:18:53.777 Latency(us) 00:18:53.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.777 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:53.777 Verification LBA range: start 0x0 length 0x4000 00:18:53.777 NVMe0n1 : 10.06 11159.74 43.59 0.00 0.00 91405.27 20753.07 75584.85 00:18:53.777 =================================================================================================================== 00:18:53.777 Total : 11159.74 43.59 0.00 0.00 91405.27 20753.07 75584.85 00:18:53.777 0 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2801861 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2801861 ']' 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2801861 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2801861 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2801861' 00:18:53.777 killing process with pid 2801861 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2801861 00:18:53.777 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.777 00:18:53.777 Latency(us) 00:18:53.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.777 =================================================================================================================== 00:18:53.777 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.777 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2801861 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.039 rmmod nvme_tcp 00:18:54.039 rmmod nvme_fabrics 00:18:54.039 rmmod nvme_keyring 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.039 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2801667 ']' 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2801667 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 2801667 ']' 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 2801667 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2801667 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2801667' 00:18:54.040 killing process with pid 2801667 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 2801667 00:18:54.040 [2024-05-15 10:12:39.743281] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:54.040 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 2801667 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.301 10:12:39 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.217 10:12:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.217 00:18:56.217 real 0m21.447s 00:18:56.217 user 0m24.446s 00:18:56.217 sys 0m6.438s 00:18:56.217 10:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:56.217 10:12:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:56.217 ************************************ 00:18:56.217 END TEST nvmf_queue_depth 00:18:56.217 ************************************ 00:18:56.217 10:12:41 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:56.217 10:12:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:56.217 10:12:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:56.217 10:12:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:56.479 ************************************ 00:18:56.479 START TEST nvmf_target_multipath 00:18:56.479 ************************************ 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:56.479 * Looking for test storage... 00:18:56.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:56.479 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:56.480 10:12:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.480 10:12:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:04.688 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:04.689 10:12:48 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:04.689 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:04.689 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.689 10:12:48 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:04.689 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:04.689 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.689 10:12:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:04.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:19:04.689 00:19:04.689 --- 10.0.0.2 ping statistics --- 00:19:04.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.689 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:04.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:19:04.689 00:19:04.689 --- 10.0.0.1 ping statistics --- 00:19:04.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.689 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:04.689 only one NIC for nvmf test 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.689 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.690 rmmod nvme_tcp 00:19:04.690 rmmod nvme_fabrics 00:19:04.690 rmmod nvme_keyring 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.690 10:12:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.637 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:05.900 00:19:05.900 real 0m9.436s 00:19:05.900 user 0m2.048s 00:19:05.900 sys 0m5.300s 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:05.900 10:12:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:05.900 ************************************ 00:19:05.900 END TEST nvmf_target_multipath 00:19:05.900 ************************************ 00:19:05.900 10:12:51 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:05.900 10:12:51 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:05.900 10:12:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:05.900 10:12:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:05.900 ************************************ 00:19:05.900 START TEST nvmf_zcopy 00:19:05.900 ************************************ 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:05.900 * Looking for test storage... 
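Note on the setup traced above: the multipath case bails out with "only one NIC for nvmf test", but the nvmf_tcp_init sequence it ran first is the same topology every TCP test in this job uses. Condensed from the trace (a sketch only; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are this rig's, with cvl_0_0 acting as the target port inside the namespace and cvl_0_1 as the initiator port in the root namespace):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator check

nvmftestfini then unwinds it, as seen in the trace: rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, removal of the spdk namespace, and an ip -4 addr flush on the remaining interface.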
00:19:05.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.900 10:12:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:05.901 10:12:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:14.058 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.058 
10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:14.058 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:14.058 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:14.058 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.058 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:19:14.059 00:19:14.059 --- 10.0.0.2 ping statistics --- 00:19:14.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.059 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:14.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:19:14.059 00:19:14.059 --- 10.0.0.1 ping statistics --- 00:19:14.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.059 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2812182 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2812182 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 2812182 ']' 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:14.059 10:12:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.059 [2024-05-15 10:12:59.053960] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:19:14.059 [2024-05-15 10:12:59.054029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.059 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.059 [2024-05-15 10:12:59.143133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.059 [2024-05-15 10:12:59.188699] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.059 [2024-05-15 10:12:59.188752] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:14.059 [2024-05-15 10:12:59.188766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.059 [2024-05-15 10:12:59.188773] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.059 [2024-05-15 10:12:59.188779] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.059 [2024-05-15 10:12:59.188801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.059 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:14.059 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:19:14.059 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:14.059 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:14.059 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.321 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 [2024-05-15 10:12:59.891689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 [2024-05-15 10:12:59.915683] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:14.322 [2024-05-15 10:12:59.915947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 malloc0 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:14.322 { 00:19:14.322 "params": { 00:19:14.322 "name": "Nvme$subsystem", 00:19:14.322 "trtype": "$TEST_TRANSPORT", 00:19:14.322 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.322 "adrfam": "ipv4", 00:19:14.322 "trsvcid": "$NVMF_PORT", 00:19:14.322 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.322 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.322 "hdgst": ${hdgst:-false}, 00:19:14.322 "ddgst": ${ddgst:-false} 00:19:14.322 }, 00:19:14.322 "method": "bdev_nvme_attach_controller" 00:19:14.322 } 00:19:14.322 EOF 00:19:14.322 )") 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:14.322 10:12:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:14.322 "params": { 00:19:14.322 "name": "Nvme1", 00:19:14.322 "trtype": "tcp", 00:19:14.322 "traddr": "10.0.0.2", 00:19:14.322 "adrfam": "ipv4", 00:19:14.322 "trsvcid": "4420", 00:19:14.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.322 "hdgst": false, 00:19:14.322 "ddgst": false 00:19:14.322 }, 00:19:14.322 "method": "bdev_nvme_attach_controller" 00:19:14.322 }' 00:19:14.322 [2024-05-15 10:13:00.023131] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:19:14.322 [2024-05-15 10:13:00.023209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812510 ] 00:19:14.322 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.322 [2024-05-15 10:13:00.093437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.583 [2024-05-15 10:13:00.142980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.583 Running I/O for 10 seconds... 
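The bdevperf run above takes its bdev configuration from JSON generated on the fly and handed over a file descriptor (--json /dev/fd/62). A hand-run equivalent is sketched below, assuming the printed "params" block is wrapped into the usual SPDK "subsystems"/"bdev" config shape; the wrapper layout and the /tmp/nvme1.json path are illustrative, not copied from the harness:

    # sketch: persist the generated attach-controller params and rerun the same workload
    cat > /tmp/nvme1.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false }
    } ] } ] }
    EOF
    # same parameters as the traced run: 10 s, queue depth 128, verify workload, 8192-byte I/O
    ./build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192

The 10-second verify results for that run follow.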
00:19:24.598 00:19:24.598 Latency(us) 00:19:24.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.598 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:24.598 Verification LBA range: start 0x0 length 0x1000 00:19:24.598 Nvme1n1 : 10.05 9130.10 71.33 0.00 0.00 13918.64 3031.04 43909.12 00:19:24.598 =================================================================================================================== 00:19:24.598 Total : 9130.10 71.33 0.00 0.00 13918.64 3031.04 43909.12 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2814532 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:24.860 { 00:19:24.860 "params": { 00:19:24.860 "name": "Nvme$subsystem", 00:19:24.860 "trtype": "$TEST_TRANSPORT", 00:19:24.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:24.860 "adrfam": "ipv4", 00:19:24.860 "trsvcid": "$NVMF_PORT", 00:19:24.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:24.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:24.860 "hdgst": ${hdgst:-false}, 00:19:24.860 "ddgst": ${ddgst:-false} 00:19:24.860 }, 00:19:24.860 "method": "bdev_nvme_attach_controller" 00:19:24.860 } 00:19:24.860 EOF 00:19:24.860 )") 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:24.860 [2024-05-15 10:13:10.489555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.489582] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:24.860 10:13:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:24.860 "params": { 00:19:24.860 "name": "Nvme1", 00:19:24.860 "trtype": "tcp", 00:19:24.860 "traddr": "10.0.0.2", 00:19:24.860 "adrfam": "ipv4", 00:19:24.860 "trsvcid": "4420", 00:19:24.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.860 "hdgst": false, 00:19:24.860 "ddgst": false 00:19:24.860 }, 00:19:24.860 "method": "bdev_nvme_attach_controller" 00:19:24.860 }' 00:19:24.860 [2024-05-15 10:13:10.501553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.501562] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.513580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.513588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.525610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.525618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.536753] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:19:24.860 [2024-05-15 10:13:10.536809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2814532 ] 00:19:24.860 [2024-05-15 10:13:10.537643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.537651] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.549674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.549682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.561703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.561710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.860 [2024-05-15 10:13:10.573733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.573742] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.585764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.585773] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.595481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.860 [2024-05-15 10:13:10.597797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.597804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.609830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.609843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:19:24.860 [2024-05-15 10:13:10.621861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.860 [2024-05-15 10:13:10.621874] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.861 [2024-05-15 10:13:10.625450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.861 [2024-05-15 10:13:10.633890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.861 [2024-05-15 10:13:10.633898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.861 [2024-05-15 10:13:10.645925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.861 [2024-05-15 10:13:10.645938] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.657952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.657962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.669983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.669995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.682013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.682021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.694047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.694058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.706078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.706089] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.718110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.718120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.730140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.730149] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.742171] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.742179] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.754203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.754211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.766236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.766245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.778268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.778278] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 
[2024-05-15 10:13:10.790301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.790309] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.802338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.802353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 Running I/O for 5 seconds... 00:19:25.123 [2024-05-15 10:13:10.814370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.814379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.843189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.843206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.856016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.856031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.869690] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.869707] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.882605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.882621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.895373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.895388] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.123 [2024-05-15 10:13:10.908323] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.123 [2024-05-15 10:13:10.908339] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:10.921077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:10.921097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:10.934327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:10.934342] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:10.947183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:10.947198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:10.960395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:10.960410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:10.973480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:10.973496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:10.986234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:19:25.385 [2024-05-15 10:13:10.986251] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:10.999811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:10.999827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.013075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.013091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.026043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.026058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.038835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.038850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.051538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.051553] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.064042] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.064058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.076856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.076873] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.089669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.089684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.102475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.102490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.114820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.114835] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.128523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.128539] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.141088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.141103] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.156416] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.156431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.385 [2024-05-15 10:13:11.171812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.385 [2024-05-15 10:13:11.171827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 
10:13:11.186247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.186262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.198936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.198951] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.212112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.212127] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.225415] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.225430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.238281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.238303] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.251256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.251272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.264455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.264470] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.277124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.277139] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.290071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.290086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.302876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.302891] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.316352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.316367] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.647 [2024-05-15 10:13:11.328838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.647 [2024-05-15 10:13:11.328853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.341476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.341491] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.354361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.354376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.367963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.367978] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.380765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.380780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.393768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.393785] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.406226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.406241] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.419201] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.419217] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.648 [2024-05-15 10:13:11.431792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.648 [2024-05-15 10:13:11.431807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.445059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.445075] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.458506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.458522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.470774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.470790] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.484170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.484186] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.497591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.497607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.510429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.510445] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.523862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.523878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.536885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.536901] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.549810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.549826] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.562534] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.562550] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.575683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.575699] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.588605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.588620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.602278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.602297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.614116] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.614132] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.627374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.627390] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.639885] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.639900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.652602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.652617] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.665356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.665373] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.678109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.678125] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.690316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.690331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.910 [2024-05-15 10:13:11.703267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.910 [2024-05-15 10:13:11.703283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.716298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.716314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.728805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.728820] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.741714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.741731] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.754887] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.754903] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.768629] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.768646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.781083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.781098] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.792779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.792795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.805526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.805541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.818374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.818390] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.831404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.831420] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.843301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.843317] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.856737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.856753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.870018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.870034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.883554] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.883570] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.896348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.896368] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.909025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.909041] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.922452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.922468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.935859] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.935876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.949513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.949528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.173 [2024-05-15 10:13:11.962472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.173 [2024-05-15 10:13:11.962488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:11.975812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:11.975828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:11.988800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:11.988815] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.001586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.001601] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.014946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.014962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.028164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.028180] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.041675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.041691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.054493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.054509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.067609] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.067625] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.080719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.080736] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.093494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.093509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.106893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.106909] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.121108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.121124] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.435 [2024-05-15 10:13:12.132714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.435 [2024-05-15 10:13:12.132730] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.436 [2024-05-15 10:13:12.145390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.436 [2024-05-15 10:13:12.145409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.436 [2024-05-15 10:13:12.158049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.436 [2024-05-15 10:13:12.158064] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.436 [2024-05-15 10:13:12.170872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.436 [2024-05-15 10:13:12.170887] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.436 [2024-05-15 10:13:12.183760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.436 [2024-05-15 10:13:12.183775] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.436 [2024-05-15 10:13:12.197992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.436 [2024-05-15 10:13:12.198008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.436 [2024-05-15 10:13:12.212465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.436 [2024-05-15 10:13:12.212479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.436 [2024-05-15 10:13:12.225243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.436 [2024-05-15 10:13:12.225258] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.240538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.240553] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.255306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.255322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.269806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.269821] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.285732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.285747] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.299982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.299997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.313774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.313789] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.325889] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.325904] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.338960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.338974] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.351677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.351692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.365995] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.366010] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.380383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.380400] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.394468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.394482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.409444] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.409463] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.422874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.422889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.435797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.435811] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.448811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.448826] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.461633] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.461649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.475230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.475244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.698 [2024-05-15 10:13:12.488389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.698 [2024-05-15 10:13:12.488404] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.501155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.501171] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.514005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.514020] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.527425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.527440] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.540345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.540360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.553512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.553528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.566515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.566531] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.579962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.579977] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.591606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.591622] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.604318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.604333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.617903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.617919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.629480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.629496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.643344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.643359] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.658374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.658392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.672262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.672277] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.685217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.685232] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.699131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.699146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.714225] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.714240] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.726917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.726932] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.960 [2024-05-15 10:13:12.740619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.960 [2024-05-15 10:13:12.740634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.755946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.755962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.769561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.769576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.782617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.782633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.796336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.796351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.810031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.810046] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.822310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.822325] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.835561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.835576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.847642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.847657] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.861063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.861078] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.876236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.876250] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.889822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.889837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.902154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.902169] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.917133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.917152] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.931597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.931613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.945259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.945274] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.222 [2024-05-15 10:13:12.959174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.222 [2024-05-15 10:13:12.959189] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.223 [2024-05-15 10:13:12.972043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.223 [2024-05-15 10:13:12.972058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.223 [2024-05-15 10:13:12.984849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.223 [2024-05-15 10:13:12.984864] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.223 [2024-05-15 10:13:12.998235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.223 [2024-05-15 10:13:12.998250] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.223 [2024-05-15 10:13:13.011778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.223 [2024-05-15 10:13:13.011793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.023392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.023407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.037680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.037694] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.052111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.052127] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.064811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.064826] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.078222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.078237] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.091043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.091058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.104326] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.104340] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.117056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.117072] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.130163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.130179] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.143362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.143378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.156254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.156270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.170082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.170097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.184990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.185006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.198990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.199005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.212046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.212062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.225380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.225395] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.238039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.238054] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.250778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.250793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.263646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.263660] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.277230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.277245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.290186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.290201] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.519 [2024-05-15 10:13:13.303572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.519 [2024-05-15 10:13:13.303588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.317432] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.317448] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.332088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.332103] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.344925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.344941] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.357215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.357229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.370462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.370478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.383412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.383427] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.396283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.396303] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.410068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.410083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.425396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.425411] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.438274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.438289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.451047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.451062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.463616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.463632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.476423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.476438] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.489683] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.489698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.502450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.502465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.516630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.516646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.531390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.531405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.545141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.545156] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.558080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.558095] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:27.781 [2024-05-15 10:13:13.572738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:27.781 [2024-05-15 10:13:13.572753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.586748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.586763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.599734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.599750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.612743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.612758] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.625552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.625567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.638119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.638134] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.651835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.651850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.665128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.665143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:28.042 [2024-05-15 10:13:13.677839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:28.042 [2024-05-15 10:13:13.677855] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the two errors above, subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use followed by nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, repeat as a pair at roughly 13 ms intervals from 10:13:13.690 through 10:13:15.820, elapsed 00:19:28.042 to 00:19:30.137, with only the timestamps changing: the test keeps issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached to nqn.2016-06.io.spdk:cnode1, so every attempt fails the same way; the identical repetitions are collapsed here. The summary of the background I/O job is interleaved with this output:)
00:19:30.137 Latency(us)
00:19:30.137 Device Information                                                                          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:19:30.137 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:30.137 Nvme1n1                                                                                     :       5.01   18853.60     147.29       0.00       0.00    6782.97    2512.21   41724.59
00:19:30.137 ===================================================================================================================
00:19:30.137 Total                                                                                       :              18853.60     147.29       0.00       0.00    6782.97    2512.21   41724.59
(the same add_ns error pair recurs a few more times between 10:13:15.830 and 10:13:15.926 while the run winds down; the final occurrences are kept verbatim below) 00:19:30.138 [2024-05-15 10:13:15.926594] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:30.399 [2024-05-15 10:13:15.938611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:30.399 [2024-05-15 10:13:15.938620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:30.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2814532) - No such process 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2814532 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:30.399 delay0 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.399 10:13:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:30.399 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.399 [2024-05-15 10:13:16.081295] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:36.989 Initializing NVMe Controllers 00:19:36.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:36.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:36.989 Initialization complete. Launching workers. 
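Once the retry loop is reaped, the trace runs a short fixed sequence: remove namespace 1, wrap malloc0 in a delay bdev, re-add the slow bdev as namespace 1, and drive it with the abort example over TCP. The lines below are a minimal sketch of that sequence, not part of the CI log, assuming rpc_cmd resolves to scripts/rpc.py as it does in the SPDK test harness; the NQN, delay parameters and transport ID are copied from the trace above:

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # free NSID 1 on cnode1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                               # delay bdev on top of malloc0, ~1 s of injected latency (values in microseconds)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # expose the slow bdev as namespace 1 again
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'             # queue random I/O and abort it against the delayed namespace

The point of the delay bdev is to keep commands outstanding long enough for the abort example to have something to abort; the submitted/success/unsuccess counts a few lines below reflect that.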
00:19:36.989 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 91 00:19:36.989 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 377, failed to submit 34 00:19:36.989 success 168, unsuccess 209, failed 0 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:36.989 rmmod nvme_tcp 00:19:36.989 rmmod nvme_fabrics 00:19:36.989 rmmod nvme_keyring 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2812182 ']' 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2812182 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 2812182 ']' 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 2812182 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2812182 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2812182' 00:19:36.989 killing process with pid 2812182 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 2812182 00:19:36.989 [2024-05-15 10:13:22.532455] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 2812182 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.989 10:13:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.538 
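The teardown that follows is the generic nvmftestfini path from test/nvmf/common.sh: unload the host-side NVMe/TCP modules, stop the target, and strip the test addresses. A condensed sketch, not part of the log, using the pid and interface names from this run:

  sync                          # settle outstanding I/O before unloading modules
  modprobe -v -r nvme-tcp       # unload the kernel NVMe/TCP initiator (nvme-fabrics and nvme_keyring follow)
  modprobe -v -r nvme-fabrics
  kill 2812182 && wait 2812182  # stop the nvmf_tgt process started for this test
  ip -4 addr flush cvl_0_1      # drop the initiator-side test address (10.0.0.1/24)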
10:13:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:39.538 00:19:39.538 real 0m33.176s 00:19:39.538 user 0m44.213s 00:19:39.538 sys 0m10.405s 00:19:39.538 10:13:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:39.538 10:13:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:39.538 ************************************ 00:19:39.538 END TEST nvmf_zcopy 00:19:39.538 ************************************ 00:19:39.538 10:13:24 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:39.538 10:13:24 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:39.538 10:13:24 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:39.538 10:13:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:39.538 ************************************ 00:19:39.538 START TEST nvmf_nmic 00:19:39.538 ************************************ 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:39.538 * Looking for test storage... 00:19:39.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.538 10:13:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:46.137 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.137 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.137 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.137 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.137 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.137 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.137 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:46.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:46.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:46.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.138 10:13:31 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:46.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:46.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:19:46.138 00:19:46.138 --- 10.0.0.2 ping statistics --- 00:19:46.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.138 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:19:46.138 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:19:46.400 00:19:46.400 --- 10.0.0.1 ping statistics --- 00:19:46.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.401 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2820872 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2820872 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 2820872 ']' 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:46.401 10:13:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:46.401 [2024-05-15 10:13:32.045171] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:19:46.401 [2024-05-15 10:13:32.045234] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.401 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.401 [2024-05-15 10:13:32.116375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.401 [2024-05-15 10:13:32.157087] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.401 [2024-05-15 10:13:32.157136] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.401 [2024-05-15 10:13:32.157143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.401 [2024-05-15 10:13:32.157150] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.401 [2024-05-15 10:13:32.157156] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.401 [2024-05-15 10:13:32.157314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.401 [2024-05-15 10:13:32.157425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.401 [2024-05-15 10:13:32.157718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.401 [2024-05-15 10:13:32.157718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 [2024-05-15 10:13:32.872062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 Malloc0 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 [2024-05-15 10:13:32.928636] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:47.345 [2024-05-15 10:13:32.928876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:47.345 test case1: single bdev can't be used in multiple subsystems 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.345 [2024-05-15 10:13:32.964774] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:47.345 [2024-05-15 10:13:32.964791] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:47.345 [2024-05-15 10:13:32.964798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.345 request: 00:19:47.345 { 00:19:47.345 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.345 "namespace": { 00:19:47.345 "bdev_name": "Malloc0", 00:19:47.345 "no_auto_visible": false 00:19:47.345 }, 00:19:47.345 "method": "nvmf_subsystem_add_ns", 00:19:47.345 "req_id": 1 00:19:47.345 } 00:19:47.345 Got JSON-RPC error response 00:19:47.345 response: 00:19:47.345 { 00:19:47.345 "code": -32602, 00:19:47.345 "message": "Invalid parameters" 00:19:47.345 } 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:19:47.345 10:13:32 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:47.345 Adding namespace failed - expected result. 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:47.345 test case2: host connect to nvmf target in multiple paths 00:19:47.345 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:47.346 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:47.346 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:47.346 [2024-05-15 10:13:32.976904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:47.346 10:13:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:47.346 10:13:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:49.262 10:13:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:50.650 10:13:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:50.650 10:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:19:50.650 10:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.650 10:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:19:50.650 10:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:19:52.664 10:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:19:52.664 10:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:19:52.664 10:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:19:52.664 10:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:19:52.664 10:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:19:52.664 10:13:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:19:52.664 10:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:52.664 [global] 00:19:52.664 thread=1 00:19:52.664 invalidate=1 00:19:52.664 rw=write 00:19:52.664 time_based=1 00:19:52.664 runtime=1 00:19:52.664 ioengine=libaio 00:19:52.664 direct=1 00:19:52.664 bs=4096 00:19:52.664 iodepth=1 00:19:52.664 norandommap=0 00:19:52.664 numjobs=1 00:19:52.664 00:19:52.664 verify_dump=1 00:19:52.664 verify_backlog=512 00:19:52.664 verify_state_save=0 00:19:52.664 do_verify=1 00:19:52.664 verify=crc32c-intel 00:19:52.664 [job0] 00:19:52.664 filename=/dev/nvme0n1 00:19:52.664 Could not set queue depth (nvme0n1) 00:19:52.925 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:52.925 fio-3.35 00:19:52.925 Starting 1 thread 00:19:53.869 00:19:53.869 job0: (groupid=0, jobs=1): err= 0: pid=2822411: Wed May 15 10:13:39 2024 00:19:53.869 read: IOPS=293, BW=1175KiB/s (1203kB/s)(1176KiB/1001msec) 00:19:53.869 slat (nsec): min=24698, max=86112, avg=26433.35, stdev=4666.12 00:19:53.869 clat (usec): min=1134, max=1621, avg=1384.67, stdev=98.56 00:19:53.869 lat (usec): min=1160, max=1646, avg=1411.11, stdev=98.80 00:19:53.869 clat percentiles (usec): 00:19:53.869 | 1.00th=[ 1156], 5.00th=[ 1221], 10.00th=[ 1254], 20.00th=[ 1270], 00:19:53.869 | 30.00th=[ 1303], 40.00th=[ 1385], 50.00th=[ 1418], 60.00th=[ 1434], 00:19:53.869 | 70.00th=[ 1450], 80.00th=[ 1467], 90.00th=[ 1500], 95.00th=[ 1516], 00:19:53.869 | 99.00th=[ 1549], 99.50th=[ 1582], 99.90th=[ 1614], 99.95th=[ 1614], 00:19:53.869 | 99.99th=[ 1614] 00:19:53.869 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:53.869 slat (nsec): min=13288, max=71380, avg=35616.40, stdev=3878.60 00:19:53.869 clat (usec): min=804, max=1288, avg=1093.81, stdev=72.95 00:19:53.869 lat (usec): min=839, max=1323, avg=1129.42, stdev=72.60 00:19:53.869 clat percentiles (usec): 00:19:53.869 | 1.00th=[ 898], 5.00th=[ 988], 10.00th=[ 1004], 20.00th=[ 1029], 00:19:53.869 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:19:53.869 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1221], 00:19:53.869 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1287], 99.95th=[ 1287], 00:19:53.869 | 99.99th=[ 1287] 00:19:53.869 bw ( KiB/s): min= 3520, max= 3520, per=100.00%, avg=3520.00, stdev= 0.00, samples=1 00:19:53.869 iops : min= 880, max= 880, avg=880.00, stdev= 0.00, samples=1 00:19:53.869 lat (usec) : 1000=4.71% 00:19:53.869 lat (msec) : 2=95.29% 00:19:53.869 cpu : usr=2.10%, sys=2.90%, ctx=809, majf=0, minf=1 00:19:53.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.869 issued rwts: total=294,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.869 00:19:53.869 Run status group 0 (all jobs): 00:19:53.869 READ: bw=1175KiB/s (1203kB/s), 1175KiB/s-1175KiB/s (1203kB/s-1203kB/s), io=1176KiB (1204kB), run=1001-1001msec 00:19:53.869 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:19:53.869 00:19:53.869 Disk stats (read/write): 00:19:53.869 nvme0n1: ios=276/512, merge=0/0, ticks=762/516, in_queue=1278, util=95.39% 00:19:53.869 10:13:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:54.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.131 10:13:39 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.131 rmmod nvme_tcp 00:19:54.131 rmmod nvme_fabrics 00:19:54.131 rmmod nvme_keyring 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2820872 ']' 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2820872 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 2820872 ']' 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 2820872 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:54.131 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2820872 00:19:54.394 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:54.394 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:54.394 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2820872' 00:19:54.394 killing process with pid 2820872 00:19:54.394 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 2820872 00:19:54.394 [2024-05-15 10:13:39.963370] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:54.394 10:13:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 2820872 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.394 10:13:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.990 10:13:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.990 00:19:56.990 real 0m17.378s 00:19:56.990 user 0m51.797s 00:19:56.990 sys 0m6.051s 00:19:56.990 10:13:42 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:19:56.990 10:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:56.990 ************************************ 00:19:56.990 END TEST nvmf_nmic 00:19:56.990 ************************************ 00:19:56.990 10:13:42 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:56.990 10:13:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:56.990 10:13:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:56.990 10:13:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:56.990 ************************************ 00:19:56.990 START TEST nvmf_fio_target 00:19:56.990 ************************************ 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:56.990 * Looking for test storage... 00:19:56.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:56.990 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.991 10:13:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:20:03.589 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:03.590 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:03.590 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.590 
10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:03.590 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:03.590 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.590 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:20:03.852 00:20:03.852 --- 10.0.0.2 ping statistics --- 00:20:03.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.852 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.531 ms 00:20:03.852 00:20:03.852 --- 10.0.0.1 ping statistics --- 00:20:03.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.852 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:03.852 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2826741 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2826741 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 2826741 ']' 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:03.853 10:13:49 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.114 [2024-05-15 10:13:49.687740] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:20:04.114 [2024-05-15 10:13:49.687806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.114 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.114 [2024-05-15 10:13:49.759250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.114 [2024-05-15 10:13:49.799272] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.114 [2024-05-15 10:13:49.799324] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.114 [2024-05-15 10:13:49.799332] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.114 [2024-05-15 10:13:49.799339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.114 [2024-05-15 10:13:49.799345] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.114 [2024-05-15 10:13:49.799550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.114 [2024-05-15 10:13:49.799668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.114 [2024-05-15 10:13:49.799826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.114 [2024-05-15 10:13:49.799827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.687 10:13:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:04.687 10:13:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:20:04.687 10:13:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.687 10:13:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:04.687 10:13:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.948 10:13:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.948 10:13:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:04.948 [2024-05-15 10:13:50.645368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.949 10:13:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.209 10:13:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:05.209 10:13:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.471 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:05.471 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.471 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:05.471 10:13:51 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.732 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:05.732 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:05.993 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:05.993 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:05.993 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.255 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:06.255 10:13:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:06.517 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:06.517 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:06.517 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:06.778 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:06.778 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.039 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:07.039 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:07.040 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.301 [2024-05-15 10:13:52.930418] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:07.301 [2024-05-15 10:13:52.930705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.301 10:13:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:07.562 10:13:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:07.562 10:13:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:09.478 10:13:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:20:09.478 10:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:20:09.478 10:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:20:09.478 10:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:20:09.478 10:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:20:09.478 10:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:20:11.395 10:13:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:20:11.395 10:13:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:20:11.395 10:13:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:20:11.395 10:13:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:20:11.395 10:13:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:20:11.395 10:13:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:20:11.395 10:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:11.395 [global] 00:20:11.395 thread=1 00:20:11.395 invalidate=1 00:20:11.395 rw=write 00:20:11.395 time_based=1 00:20:11.395 runtime=1 00:20:11.395 ioengine=libaio 00:20:11.395 direct=1 00:20:11.395 bs=4096 00:20:11.395 iodepth=1 00:20:11.395 norandommap=0 00:20:11.395 numjobs=1 00:20:11.395 00:20:11.395 verify_dump=1 00:20:11.395 verify_backlog=512 00:20:11.395 verify_state_save=0 00:20:11.395 do_verify=1 00:20:11.395 verify=crc32c-intel 00:20:11.395 [job0] 00:20:11.395 filename=/dev/nvme0n1 00:20:11.395 [job1] 00:20:11.395 filename=/dev/nvme0n2 00:20:11.395 [job2] 00:20:11.395 filename=/dev/nvme0n3 00:20:11.395 [job3] 00:20:11.395 filename=/dev/nvme0n4 00:20:11.395 Could not set queue depth (nvme0n1) 00:20:11.395 Could not set queue depth (nvme0n2) 00:20:11.395 Could not set queue depth (nvme0n3) 00:20:11.395 Could not set queue depth (nvme0n4) 00:20:11.656 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:11.656 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:11.656 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:11.656 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:11.656 fio-3.35 00:20:11.656 Starting 4 threads 00:20:13.080 00:20:13.080 job0: (groupid=0, jobs=1): err= 0: pid=2828534: Wed May 15 10:13:58 2024 00:20:13.080 read: IOPS=305, BW=1222KiB/s (1251kB/s)(1224KiB/1002msec) 00:20:13.080 slat (nsec): min=23418, max=60070, avg=24296.37, stdev=3257.57 00:20:13.080 clat (usec): min=1096, max=42756, avg=1487.06, stdev=2367.80 00:20:13.080 lat (usec): min=1120, max=42780, avg=1511.35, stdev=2367.79 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[ 1188], 5.00th=[ 1221], 10.00th=[ 1270], 20.00th=[ 1319], 00:20:13.080 | 30.00th=[ 1336], 40.00th=[ 1352], 50.00th=[ 1352], 60.00th=[ 1369], 00:20:13.080 | 70.00th=[ 1385], 80.00th=[ 1401], 90.00th=[ 1418], 95.00th=[ 1434], 00:20:13.080 | 99.00th=[ 1516], 99.50th=[ 1549], 99.90th=[42730], 99.95th=[42730], 
00:20:13.080 | 99.99th=[42730] 00:20:13.080 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:20:13.080 slat (nsec): min=29801, max=81172, avg=31527.79, stdev=2999.81 00:20:13.080 clat (usec): min=719, max=1347, avg=1007.67, stdev=106.95 00:20:13.080 lat (usec): min=750, max=1381, avg=1039.20, stdev=107.13 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[ 791], 5.00th=[ 816], 10.00th=[ 848], 20.00th=[ 922], 00:20:13.080 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1045], 00:20:13.080 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:20:13.080 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1352], 00:20:13.080 | 99.99th=[ 1352] 00:20:13.080 bw ( KiB/s): min= 280, max= 3816, per=26.05%, avg=2048.00, stdev=2500.33, samples=2 00:20:13.080 iops : min= 70, max= 954, avg=512.00, stdev=625.08, samples=2 00:20:13.080 lat (usec) : 750=0.24%, 1000=28.00% 00:20:13.080 lat (msec) : 2=71.64%, 50=0.12% 00:20:13.080 cpu : usr=0.60%, sys=3.10%, ctx=819, majf=0, minf=1 00:20:13.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 issued rwts: total=306,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.080 job1: (groupid=0, jobs=1): err= 0: pid=2828560: Wed May 15 10:13:58 2024 00:20:13.080 read: IOPS=10, BW=42.5KiB/s (43.5kB/s)(44.0KiB/1035msec) 00:20:13.080 slat (nsec): min=26482, max=27261, avg=26834.36, stdev=281.70 00:20:13.080 clat (usec): min=41870, max=42291, avg=42004.34, stdev=113.38 00:20:13.080 lat (usec): min=41896, max=42318, avg=42031.17, stdev=113.53 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:13.080 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:13.080 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:13.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:13.080 | 99.99th=[42206] 00:20:13.080 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:20:13.080 slat (nsec): min=11253, max=54071, avg=35317.31, stdev=3057.51 00:20:13.080 clat (usec): min=727, max=2309, avg=1074.79, stdev=134.98 00:20:13.080 lat (usec): min=761, max=2349, avg=1110.11, stdev=135.27 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[ 783], 5.00th=[ 865], 10.00th=[ 906], 20.00th=[ 963], 00:20:13.080 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[ 1090], 60.00th=[ 1123], 00:20:13.080 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1237], 95.00th=[ 1254], 00:20:13.080 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 2311], 99.95th=[ 2311], 00:20:13.080 | 99.99th=[ 2311] 00:20:13.080 bw ( KiB/s): min= 520, max= 3576, per=26.05%, avg=2048.00, stdev=2160.92, samples=2 00:20:13.080 iops : min= 130, max= 894, avg=512.00, stdev=540.23, samples=2 00:20:13.080 lat (usec) : 750=0.57%, 1000=28.30% 00:20:13.080 lat (msec) : 2=68.83%, 4=0.19%, 50=2.10% 00:20:13.080 cpu : usr=1.45%, sys=1.84%, ctx=526, majf=0, minf=1 00:20:13.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 
issued rwts: total=11,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.080 job2: (groupid=0, jobs=1): err= 0: pid=2828580: Wed May 15 10:13:58 2024 00:20:13.080 read: IOPS=11, BW=46.1KiB/s (47.2kB/s)(48.0KiB/1042msec) 00:20:13.080 slat (nsec): min=25331, max=31900, avg=26139.67, stdev=1826.23 00:20:13.080 clat (usec): min=41890, max=42069, avg=41977.53, stdev=48.53 00:20:13.080 lat (usec): min=41922, max=42095, avg=42003.67, stdev=47.56 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:20:13.080 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:13.080 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:13.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:13.080 | 99.99th=[42206] 00:20:13.080 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:20:13.080 slat (nsec): min=10202, max=51878, avg=33156.83, stdev=3691.12 00:20:13.080 clat (usec): min=705, max=1502, avg=1010.20, stdev=101.14 00:20:13.080 lat (usec): min=738, max=1520, avg=1043.36, stdev=100.82 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[ 807], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 930], 00:20:13.080 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1020], 00:20:13.080 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1205], 00:20:13.080 | 99.00th=[ 1270], 99.50th=[ 1352], 99.90th=[ 1500], 99.95th=[ 1500], 00:20:13.080 | 99.99th=[ 1500] 00:20:13.080 bw ( KiB/s): min= 296, max= 3800, per=26.05%, avg=2048.00, stdev=2477.70, samples=2 00:20:13.080 iops : min= 74, max= 950, avg=512.00, stdev=619.43, samples=2 00:20:13.080 lat (usec) : 750=0.19%, 1000=50.19% 00:20:13.080 lat (msec) : 2=47.33%, 50=2.29% 00:20:13.080 cpu : usr=1.34%, sys=1.73%, ctx=524, majf=0, minf=1 00:20:13.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.080 job3: (groupid=0, jobs=1): err= 0: pid=2828588: Wed May 15 10:13:58 2024 00:20:13.080 read: IOPS=47, BW=190KiB/s (194kB/s)(196KiB/1034msec) 00:20:13.080 slat (nsec): min=10489, max=58556, avg=25123.82, stdev=5318.24 00:20:13.080 clat (usec): min=1525, max=42794, avg=9113.67, stdev=15789.96 00:20:13.080 lat (usec): min=1550, max=42805, avg=9138.79, stdev=15788.92 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[ 1532], 5.00th=[ 1598], 10.00th=[ 1598], 20.00th=[ 1663], 00:20:13.080 | 30.00th=[ 1696], 40.00th=[ 1713], 50.00th=[ 1729], 60.00th=[ 1745], 00:20:13.080 | 70.00th=[ 1762], 80.00th=[ 1876], 90.00th=[42206], 95.00th=[42206], 00:20:13.080 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:13.080 | 99.99th=[42730] 00:20:13.080 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:20:13.080 slat (nsec): min=30964, max=50548, avg=32552.70, stdev=2863.17 00:20:13.080 clat (usec): min=807, max=1584, avg=1103.78, stdev=104.69 00:20:13.080 lat (usec): min=840, max=1616, avg=1136.33, stdev=104.79 00:20:13.080 clat percentiles (usec): 00:20:13.080 | 1.00th=[ 881], 5.00th=[ 955], 10.00th=[ 971], 20.00th=[ 1029], 00:20:13.080 | 
30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1123], 00:20:13.080 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1270], 00:20:13.080 | 99.00th=[ 1401], 99.50th=[ 1483], 99.90th=[ 1582], 99.95th=[ 1582], 00:20:13.080 | 99.99th=[ 1582] 00:20:13.080 bw ( KiB/s): min= 616, max= 3480, per=26.05%, avg=2048.00, stdev=2025.15, samples=2 00:20:13.080 iops : min= 154, max= 870, avg=512.00, stdev=506.29, samples=2 00:20:13.080 lat (usec) : 1000=11.94% 00:20:13.080 lat (msec) : 2=86.45%, 50=1.60% 00:20:13.080 cpu : usr=0.68%, sys=1.94%, ctx=561, majf=0, minf=1 00:20:13.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.080 issued rwts: total=49,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.080 00:20:13.080 Run status group 0 (all jobs): 00:20:13.080 READ: bw=1451KiB/s (1486kB/s), 42.5KiB/s-1222KiB/s (43.5kB/s-1251kB/s), io=1512KiB (1548kB), run=1002-1042msec 00:20:13.080 WRITE: bw=7862KiB/s (8050kB/s), 1965KiB/s-2044KiB/s (2013kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1042msec 00:20:13.080 00:20:13.080 Disk stats (read/write): 00:20:13.080 nvme0n1: ios=224/512, merge=0/0, ticks=458/522, in_queue=980, util=91.08% 00:20:13.080 nvme0n2: ios=56/512, merge=0/0, ticks=780/511, in_queue=1291, util=99.69% 00:20:13.080 nvme0n3: ios=6/512, merge=0/0, ticks=252/487, in_queue=739, util=87.88% 00:20:13.080 nvme0n4: ios=43/512, merge=0/0, ticks=192/545, in_queue=737, util=89.22% 00:20:13.080 10:13:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:13.080 [global] 00:20:13.080 thread=1 00:20:13.080 invalidate=1 00:20:13.080 rw=randwrite 00:20:13.080 time_based=1 00:20:13.080 runtime=1 00:20:13.080 ioengine=libaio 00:20:13.080 direct=1 00:20:13.080 bs=4096 00:20:13.080 iodepth=1 00:20:13.080 norandommap=0 00:20:13.080 numjobs=1 00:20:13.080 00:20:13.080 verify_dump=1 00:20:13.080 verify_backlog=512 00:20:13.080 verify_state_save=0 00:20:13.080 do_verify=1 00:20:13.080 verify=crc32c-intel 00:20:13.080 [job0] 00:20:13.080 filename=/dev/nvme0n1 00:20:13.080 [job1] 00:20:13.080 filename=/dev/nvme0n2 00:20:13.080 [job2] 00:20:13.080 filename=/dev/nvme0n3 00:20:13.080 [job3] 00:20:13.080 filename=/dev/nvme0n4 00:20:13.080 Could not set queue depth (nvme0n1) 00:20:13.080 Could not set queue depth (nvme0n2) 00:20:13.080 Could not set queue depth (nvme0n3) 00:20:13.080 Could not set queue depth (nvme0n4) 00:20:13.342 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.342 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.342 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.342 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.342 fio-3.35 00:20:13.342 Starting 4 threads 00:20:14.728 00:20:14.728 job0: (groupid=0, jobs=1): err= 0: pid=2829033: Wed May 15 10:14:00 2024 00:20:14.728 read: IOPS=228, BW=915KiB/s (937kB/s)(916KiB/1001msec) 00:20:14.728 slat (nsec): min=23961, max=60910, avg=24941.11, stdev=3944.97 00:20:14.728 clat (usec): 
min=1505, max=2621, avg=1718.95, stdev=90.00 00:20:14.728 lat (usec): min=1529, max=2646, avg=1743.89, stdev=90.04 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[ 1532], 5.00th=[ 1647], 10.00th=[ 1647], 20.00th=[ 1680], 00:20:14.728 | 30.00th=[ 1680], 40.00th=[ 1696], 50.00th=[ 1713], 60.00th=[ 1713], 00:20:14.728 | 70.00th=[ 1745], 80.00th=[ 1762], 90.00th=[ 1778], 95.00th=[ 1827], 00:20:14.728 | 99.00th=[ 1942], 99.50th=[ 2278], 99.90th=[ 2606], 99.95th=[ 2606], 00:20:14.728 | 99.99th=[ 2606] 00:20:14.728 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:14.728 slat (nsec): min=10256, max=70042, avg=31167.17, stdev=4113.01 00:20:14.728 clat (usec): min=900, max=1315, avg=1130.40, stdev=68.52 00:20:14.728 lat (usec): min=930, max=1359, avg=1161.57, stdev=68.34 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[ 971], 5.00th=[ 1020], 10.00th=[ 1045], 20.00th=[ 1074], 00:20:14.728 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:20:14.728 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1205], 95.00th=[ 1237], 00:20:14.728 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1319], 99.95th=[ 1319], 00:20:14.728 | 99.99th=[ 1319] 00:20:14.728 bw ( KiB/s): min= 3408, max= 3408, per=43.14%, avg=3408.00, stdev= 0.00, samples=1 00:20:14.728 iops : min= 852, max= 852, avg=852.00, stdev= 0.00, samples=1 00:20:14.728 lat (usec) : 1000=1.62% 00:20:14.728 lat (msec) : 2=98.11%, 4=0.27% 00:20:14.728 cpu : usr=1.10%, sys=2.30%, ctx=742, majf=0, minf=1 00:20:14.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 issued rwts: total=229,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:14.728 job1: (groupid=0, jobs=1): err= 0: pid=2829045: Wed May 15 10:14:00 2024 00:20:14.728 read: IOPS=14, BW=57.9KiB/s (59.3kB/s)(60.0KiB/1036msec) 00:20:14.728 slat (nsec): min=26036, max=26532, avg=26235.60, stdev=136.55 00:20:14.728 clat (usec): min=1359, max=42488, avg=33847.19, stdev=16799.96 00:20:14.728 lat (usec): min=1385, max=42514, avg=33873.42, stdev=16799.98 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[ 1385], 20.00th=[ 1434], 00:20:14.728 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:14.728 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:20:14.728 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:14.728 | 99.99th=[42730] 00:20:14.728 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:20:14.728 slat (nsec): min=32007, max=52783, avg=34391.74, stdev=4244.71 00:20:14.728 clat (usec): min=763, max=1289, avg=984.30, stdev=82.50 00:20:14.728 lat (usec): min=796, max=1322, avg=1018.69, stdev=82.71 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[ 799], 5.00th=[ 832], 10.00th=[ 848], 20.00th=[ 889], 00:20:14.728 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:20:14.728 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1074], 00:20:14.728 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1287], 99.95th=[ 1287], 00:20:14.728 | 99.99th=[ 1287] 00:20:14.728 bw ( KiB/s): min= 216, max= 3880, per=25.93%, avg=2048.00, stdev=2590.84, samples=2 00:20:14.728 iops : min= 54, max= 970, 
avg=512.00, stdev=647.71, samples=2 00:20:14.728 lat (usec) : 1000=42.50% 00:20:14.728 lat (msec) : 2=55.22%, 50=2.28% 00:20:14.728 cpu : usr=0.97%, sys=1.55%, ctx=530, majf=0, minf=1 00:20:14.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:14.728 job2: (groupid=0, jobs=1): err= 0: pid=2829066: Wed May 15 10:14:00 2024 00:20:14.728 read: IOPS=332, BW=1331KiB/s (1363kB/s)(1332KiB/1001msec) 00:20:14.728 slat (nsec): min=6892, max=61148, avg=24756.30, stdev=3909.60 00:20:14.728 clat (usec): min=934, max=3209, avg=1422.25, stdev=152.82 00:20:14.728 lat (usec): min=964, max=3235, avg=1447.01, stdev=152.95 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1303], 20.00th=[ 1352], 00:20:14.728 | 30.00th=[ 1385], 40.00th=[ 1401], 50.00th=[ 1418], 60.00th=[ 1434], 00:20:14.728 | 70.00th=[ 1450], 80.00th=[ 1483], 90.00th=[ 1516], 95.00th=[ 1549], 00:20:14.728 | 99.00th=[ 1827], 99.50th=[ 2573], 99.90th=[ 3195], 99.95th=[ 3195], 00:20:14.728 | 99.99th=[ 3195] 00:20:14.728 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:20:14.728 slat (nsec): min=9529, max=48489, avg=30445.16, stdev=2177.91 00:20:14.728 clat (usec): min=714, max=1253, avg=968.22, stdev=88.66 00:20:14.728 lat (usec): min=725, max=1284, avg=998.67, stdev=88.76 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 840], 20.00th=[ 906], 00:20:14.728 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 988], 00:20:14.728 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1106], 00:20:14.728 | 99.00th=[ 1139], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:20:14.728 | 99.99th=[ 1254] 00:20:14.728 bw ( KiB/s): min= 3976, max= 3976, per=50.33%, avg=3976.00, stdev= 0.00, samples=1 00:20:14.728 iops : min= 994, max= 994, avg=994.00, stdev= 0.00, samples=1 00:20:14.728 lat (usec) : 750=0.12%, 1000=39.64% 00:20:14.728 lat (msec) : 2=60.00%, 4=0.24% 00:20:14.728 cpu : usr=1.50%, sys=2.30%, ctx=845, majf=0, minf=1 00:20:14.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 issued rwts: total=333,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:14.728 job3: (groupid=0, jobs=1): err= 0: pid=2829074: Wed May 15 10:14:00 2024 00:20:14.728 read: IOPS=11, BW=46.3KiB/s (47.4kB/s)(48.0KiB/1037msec) 00:20:14.728 slat (nsec): min=24857, max=26721, avg=25450.92, stdev=450.56 00:20:14.728 clat (usec): min=41896, max=42567, avg=42089.14, stdev=224.06 00:20:14.728 lat (usec): min=41921, max=42593, avg=42114.60, stdev=224.28 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:14.728 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:20:14.728 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:20:14.728 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:14.728 | 
99.99th=[42730] 00:20:14.728 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:20:14.728 slat (nsec): min=10039, max=52369, avg=32927.99, stdev=2930.35 00:20:14.728 clat (usec): min=644, max=1295, avg=996.55, stdev=73.72 00:20:14.728 lat (usec): min=655, max=1327, avg=1029.48, stdev=74.36 00:20:14.728 clat percentiles (usec): 00:20:14.728 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 955], 00:20:14.728 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:20:14.728 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1123], 00:20:14.728 | 99.00th=[ 1237], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:20:14.728 | 99.99th=[ 1303] 00:20:14.728 bw ( KiB/s): min= 240, max= 3856, per=25.93%, avg=2048.00, stdev=2556.90, samples=2 00:20:14.728 iops : min= 60, max= 964, avg=512.00, stdev=639.22, samples=2 00:20:14.728 lat (usec) : 750=0.57%, 1000=54.58% 00:20:14.728 lat (msec) : 2=42.56%, 50=2.29% 00:20:14.728 cpu : usr=0.29%, sys=2.12%, ctx=527, majf=0, minf=1 00:20:14.728 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.728 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.728 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:14.728 00:20:14.728 Run status group 0 (all jobs): 00:20:14.728 READ: bw=2272KiB/s (2326kB/s), 46.3KiB/s-1331KiB/s (47.4kB/s-1363kB/s), io=2356KiB (2413kB), run=1001-1037msec 00:20:14.728 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2046KiB/s (2022kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1037msec 00:20:14.728 00:20:14.728 Disk stats (read/write): 00:20:14.728 nvme0n1: ios=186/512, merge=0/0, ticks=341/551, in_queue=892, util=90.38% 00:20:14.728 nvme0n2: ios=59/512, merge=0/0, ticks=655/440, in_queue=1095, util=97.45% 00:20:14.728 nvme0n3: ios=251/512, merge=0/0, ticks=621/510, in_queue=1131, util=93.15% 00:20:14.728 nvme0n4: ios=29/512, merge=0/0, ticks=1219/521, in_queue=1740, util=97.33% 00:20:14.728 10:14:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:14.728 [global] 00:20:14.728 thread=1 00:20:14.728 invalidate=1 00:20:14.728 rw=write 00:20:14.728 time_based=1 00:20:14.728 runtime=1 00:20:14.728 ioengine=libaio 00:20:14.728 direct=1 00:20:14.728 bs=4096 00:20:14.728 iodepth=128 00:20:14.728 norandommap=0 00:20:14.728 numjobs=1 00:20:14.728 00:20:14.728 verify_dump=1 00:20:14.728 verify_backlog=512 00:20:14.728 verify_state_save=0 00:20:14.728 do_verify=1 00:20:14.728 verify=crc32c-intel 00:20:14.728 [job0] 00:20:14.728 filename=/dev/nvme0n1 00:20:14.728 [job1] 00:20:14.728 filename=/dev/nvme0n2 00:20:14.728 [job2] 00:20:14.728 filename=/dev/nvme0n3 00:20:14.728 [job3] 00:20:14.728 filename=/dev/nvme0n4 00:20:14.728 Could not set queue depth (nvme0n1) 00:20:14.728 Could not set queue depth (nvme0n2) 00:20:14.728 Could not set queue depth (nvme0n3) 00:20:14.728 Could not set queue depth (nvme0n4) 00:20:14.989 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.989 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.989 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:20:14.989 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:14.989 fio-3.35 00:20:14.989 Starting 4 threads 00:20:16.375 00:20:16.375 job0: (groupid=0, jobs=1): err= 0: pid=2829539: Wed May 15 10:14:01 2024 00:20:16.375 read: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:20:16.375 slat (nsec): min=896, max=6624.3k, avg=63615.65, stdev=378236.32 00:20:16.375 clat (usec): min=3371, max=23834, avg=8299.92, stdev=2465.66 00:20:16.376 lat (usec): min=3373, max=24251, avg=8363.54, stdev=2488.94 00:20:16.376 clat percentiles (usec): 00:20:16.376 | 1.00th=[ 5211], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 6849], 00:20:16.376 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:20:16.376 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[11076], 95.00th=[13698], 00:20:16.376 | 99.00th=[19530], 99.50th=[20579], 99.90th=[20579], 99.95th=[20579], 00:20:16.376 | 99.99th=[23725] 00:20:16.376 write: IOPS=7342, BW=28.7MiB/s (30.1MB/s)(28.8MiB/1003msec); 0 zone resets 00:20:16.376 slat (nsec): min=1568, max=8146.0k, avg=70136.76, stdev=359398.80 00:20:16.376 clat (usec): min=1621, max=21867, avg=9173.88, stdev=3316.58 00:20:16.376 lat (usec): min=1958, max=21870, avg=9244.02, stdev=3336.91 00:20:16.376 clat percentiles (usec): 00:20:16.376 | 1.00th=[ 3523], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6652], 00:20:16.376 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 8225], 60.00th=[ 9241], 00:20:16.376 | 70.00th=[10159], 80.00th=[11863], 90.00th=[13960], 95.00th=[15401], 00:20:16.376 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21890], 99.95th=[21890], 00:20:16.376 | 99.99th=[21890] 00:20:16.376 bw ( KiB/s): min=28672, max=29232, per=32.37%, avg=28952.00, stdev=395.98, samples=2 00:20:16.376 iops : min= 7168, max= 7308, avg=7238.00, stdev=98.99, samples=2 00:20:16.376 lat (msec) : 2=0.06%, 4=0.64%, 10=76.60%, 20=21.70%, 50=1.00% 00:20:16.376 cpu : usr=3.29%, sys=6.29%, ctx=894, majf=0, minf=1 00:20:16.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:16.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.376 issued rwts: total=7168,7365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.376 job1: (groupid=0, jobs=1): err= 0: pid=2829555: Wed May 15 10:14:01 2024 00:20:16.376 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:20:16.376 slat (nsec): min=862, max=23320k, avg=135752.37, stdev=961009.64 00:20:16.376 clat (usec): min=2601, max=53701, avg=18046.21, stdev=8067.19 00:20:16.376 lat (usec): min=2607, max=53725, avg=18181.96, stdev=8132.91 00:20:16.376 clat percentiles (usec): 00:20:16.376 | 1.00th=[ 3982], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11338], 00:20:16.376 | 30.00th=[12518], 40.00th=[14222], 50.00th=[15926], 60.00th=[18220], 00:20:16.376 | 70.00th=[20579], 80.00th=[23725], 90.00th=[29230], 95.00th=[33424], 00:20:16.376 | 99.00th=[43779], 99.50th=[44303], 99.90th=[45351], 99.95th=[51643], 00:20:16.376 | 99.99th=[53740] 00:20:16.376 write: IOPS=3673, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1005msec); 0 zone resets 00:20:16.376 slat (nsec): min=1520, max=18479k, avg=128408.00, stdev=860962.01 00:20:16.376 clat (usec): min=722, max=48952, avg=16829.78, stdev=8191.97 00:20:16.376 lat (usec): min=1635, max=50101, avg=16958.18, stdev=8234.63 00:20:16.376 clat percentiles (usec): 
00:20:16.376 | 1.00th=[ 3064], 5.00th=[ 5014], 10.00th=[ 7373], 20.00th=[10159], 00:20:16.376 | 30.00th=[13042], 40.00th=[14484], 50.00th=[15926], 60.00th=[17695], 00:20:16.376 | 70.00th=[19268], 80.00th=[21103], 90.00th=[28181], 95.00th=[34341], 00:20:16.376 | 99.00th=[39584], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:20:16.376 | 99.99th=[49021] 00:20:16.376 bw ( KiB/s): min=12912, max=15768, per=16.04%, avg=14340.00, stdev=2019.50, samples=2 00:20:16.376 iops : min= 3228, max= 3942, avg=3585.00, stdev=504.87, samples=2 00:20:16.376 lat (usec) : 750=0.01% 00:20:16.376 lat (msec) : 2=0.04%, 4=1.90%, 10=11.67%, 20=56.91%, 50=29.43% 00:20:16.376 lat (msec) : 100=0.04% 00:20:16.376 cpu : usr=2.59%, sys=4.08%, ctx=332, majf=0, minf=1 00:20:16.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:16.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.376 issued rwts: total=3584,3692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.376 job2: (groupid=0, jobs=1): err= 0: pid=2829563: Wed May 15 10:14:01 2024 00:20:16.376 read: IOPS=6493, BW=25.4MiB/s (26.6MB/s)(26.0MiB/1025msec) 00:20:16.376 slat (nsec): min=928, max=14529k, avg=76723.93, stdev=564082.71 00:20:16.376 clat (usec): min=3131, max=41225, avg=10295.39, stdev=3415.13 00:20:16.376 lat (usec): min=3139, max=41228, avg=10372.11, stdev=3439.43 00:20:16.376 clat percentiles (usec): 00:20:16.376 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7504], 00:20:16.376 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[10290], 00:20:16.376 | 70.00th=[11863], 80.00th=[12911], 90.00th=[15139], 95.00th=[17695], 00:20:16.376 | 99.00th=[19530], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:20:16.376 | 99.99th=[41157] 00:20:16.376 write: IOPS=6631, BW=25.9MiB/s (27.2MB/s)(26.6MiB/1025msec); 0 zone resets 00:20:16.376 slat (nsec): min=1605, max=7025.1k, avg=66214.19, stdev=388033.70 00:20:16.376 clat (usec): min=2313, max=34382, avg=9052.77, stdev=4583.17 00:20:16.376 lat (usec): min=2320, max=34543, avg=9118.98, stdev=4593.59 00:20:16.376 clat percentiles (usec): 00:20:16.376 | 1.00th=[ 3326], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6587], 00:20:16.376 | 30.00th=[ 6980], 40.00th=[ 7373], 50.00th=[ 7701], 60.00th=[ 8160], 00:20:16.376 | 70.00th=[ 9110], 80.00th=[11076], 90.00th=[13829], 95.00th=[17695], 00:20:16.376 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:20:16.376 | 99.99th=[34341] 00:20:16.376 bw ( KiB/s): min=22456, max=30912, per=29.84%, avg=26684.00, stdev=5979.29, samples=2 00:20:16.376 iops : min= 5614, max= 7728, avg=6671.00, stdev=1494.82, samples=2 00:20:16.376 lat (msec) : 4=1.21%, 10=64.58%, 20=32.31%, 50=1.90% 00:20:16.376 cpu : usr=4.10%, sys=5.95%, ctx=663, majf=0, minf=1 00:20:16.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:16.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.376 issued rwts: total=6656,6797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.376 job3: (groupid=0, jobs=1): err= 0: pid=2829570: Wed May 15 10:14:01 2024 00:20:16.376 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:20:16.376 slat 
(nsec): min=891, max=12625k, avg=86980.85, stdev=674043.54 00:20:16.376 clat (usec): min=5166, max=53475, avg=13039.16, stdev=4932.16 00:20:16.376 lat (usec): min=5170, max=53486, avg=13126.14, stdev=4978.02 00:20:16.376 clat percentiles (usec): 00:20:16.376 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:20:16.376 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11994], 60.00th=[12780], 00:20:16.376 | 70.00th=[13698], 80.00th=[14484], 90.00th=[18220], 95.00th=[22676], 00:20:16.376 | 99.00th=[30540], 99.50th=[33817], 99.90th=[52691], 99.95th=[53216], 00:20:16.376 | 99.99th=[53216] 00:20:16.376 write: IOPS=5036, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1005msec); 0 zone resets 00:20:16.376 slat (nsec): min=1553, max=9863.5k, avg=102820.22, stdev=568583.50 00:20:16.376 clat (usec): min=1487, max=33960, avg=13327.76, stdev=5158.44 00:20:16.376 lat (usec): min=1496, max=33964, avg=13430.58, stdev=5179.17 00:20:16.376 clat percentiles (usec): 00:20:16.376 | 1.00th=[ 4817], 5.00th=[ 6587], 10.00th=[ 7504], 20.00th=[ 8979], 00:20:16.376 | 30.00th=[10028], 40.00th=[11863], 50.00th=[12649], 60.00th=[13566], 00:20:16.376 | 70.00th=[14746], 80.00th=[16909], 90.00th=[20317], 95.00th=[22938], 00:20:16.376 | 99.00th=[30016], 99.50th=[31327], 99.90th=[32637], 99.95th=[32637], 00:20:16.376 | 99.99th=[33817] 00:20:16.376 bw ( KiB/s): min=19176, max=20304, per=22.07%, avg=19740.00, stdev=797.62, samples=2 00:20:16.376 iops : min= 4794, max= 5076, avg=4935.00, stdev=199.40, samples=2 00:20:16.376 lat (msec) : 2=0.03%, 4=0.22%, 10=26.28%, 20=64.71%, 50=8.57% 00:20:16.376 lat (msec) : 100=0.20% 00:20:16.376 cpu : usr=4.28%, sys=4.38%, ctx=466, majf=0, minf=1 00:20:16.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:16.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:16.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:16.376 issued rwts: total=4608,5062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:16.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:16.376 00:20:16.376 Run status group 0 (all jobs): 00:20:16.376 READ: bw=83.9MiB/s (88.0MB/s), 13.9MiB/s-27.9MiB/s (14.6MB/s-29.3MB/s), io=86.0MiB (90.2MB), run=1003-1025msec 00:20:16.376 WRITE: bw=87.3MiB/s (91.6MB/s), 14.3MiB/s-28.7MiB/s (15.0MB/s-30.1MB/s), io=89.5MiB (93.9MB), run=1003-1025msec 00:20:16.376 00:20:16.376 Disk stats (read/write): 00:20:16.376 nvme0n1: ios=6105/6144, merge=0/0, ticks=29542/32467, in_queue=62009, util=98.30% 00:20:16.376 nvme0n2: ios=2715/3072, merge=0/0, ticks=36506/34799, in_queue=71305, util=92.66% 00:20:16.376 nvme0n3: ios=5659/5774, merge=0/0, ticks=54916/47731, in_queue=102647, util=89.77% 00:20:16.376 nvme0n4: ios=4042/4096, merge=0/0, ticks=44572/48872, in_queue=93444, util=89.43% 00:20:16.376 10:14:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:16.376 [global] 00:20:16.376 thread=1 00:20:16.376 invalidate=1 00:20:16.376 rw=randwrite 00:20:16.376 time_based=1 00:20:16.376 runtime=1 00:20:16.376 ioengine=libaio 00:20:16.376 direct=1 00:20:16.376 bs=4096 00:20:16.376 iodepth=128 00:20:16.376 norandommap=0 00:20:16.376 numjobs=1 00:20:16.376 00:20:16.376 verify_dump=1 00:20:16.376 verify_backlog=512 00:20:16.376 verify_state_save=0 00:20:16.376 do_verify=1 00:20:16.376 verify=crc32c-intel 00:20:16.376 [job0] 00:20:16.376 filename=/dev/nvme0n1 00:20:16.376 [job1] 00:20:16.376 
filename=/dev/nvme0n2 00:20:16.376 [job2] 00:20:16.376 filename=/dev/nvme0n3 00:20:16.376 [job3] 00:20:16.376 filename=/dev/nvme0n4 00:20:16.376 Could not set queue depth (nvme0n1) 00:20:16.376 Could not set queue depth (nvme0n2) 00:20:16.376 Could not set queue depth (nvme0n3) 00:20:16.376 Could not set queue depth (nvme0n4) 00:20:16.636 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:16.636 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:16.636 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:16.636 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:16.636 fio-3.35 00:20:16.636 Starting 4 threads 00:20:18.022 00:20:18.022 job0: (groupid=0, jobs=1): err= 0: pid=2830012: Wed May 15 10:14:03 2024 00:20:18.022 read: IOPS=5677, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1006msec) 00:20:18.022 slat (nsec): min=978, max=14536k, avg=78343.05, stdev=527988.38 00:20:18.022 clat (usec): min=3557, max=32352, avg=10109.97, stdev=3658.42 00:20:18.022 lat (usec): min=4277, max=32354, avg=10188.31, stdev=3682.22 00:20:18.022 clat percentiles (usec): 00:20:18.022 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7177], 00:20:18.022 | 30.00th=[ 7898], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[10159], 00:20:18.022 | 70.00th=[10945], 80.00th=[12518], 90.00th=[14484], 95.00th=[16057], 00:20:18.022 | 99.00th=[25035], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:20:18.022 | 99.99th=[32375] 00:20:18.022 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:20:18.022 slat (nsec): min=1577, max=5957.2k, avg=83465.91, stdev=386242.32 00:20:18.022 clat (usec): min=2506, max=33163, avg=11230.10, stdev=4896.87 00:20:18.022 lat (usec): min=2516, max=33165, avg=11313.57, stdev=4916.09 00:20:18.022 clat percentiles (usec): 00:20:18.022 | 1.00th=[ 4293], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7177], 00:20:18.022 | 30.00th=[ 7898], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[11338], 00:20:18.022 | 70.00th=[12387], 80.00th=[14222], 90.00th=[18220], 95.00th=[22152], 00:20:18.022 | 99.00th=[26084], 99.50th=[26870], 99.90th=[27395], 99.95th=[27395], 00:20:18.022 | 99.99th=[33162] 00:20:18.022 bw ( KiB/s): min=20480, max=28288, per=27.17%, avg=24384.00, stdev=5521.09, samples=2 00:20:18.022 iops : min= 5120, max= 7072, avg=6096.00, stdev=1380.27, samples=2 00:20:18.022 lat (msec) : 4=0.42%, 10=53.04%, 20=41.25%, 50=5.28% 00:20:18.022 cpu : usr=2.69%, sys=6.47%, ctx=747, majf=0, minf=1 00:20:18.022 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:18.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.022 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:18.022 issued rwts: total=5712,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.022 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:18.022 job1: (groupid=0, jobs=1): err= 0: pid=2830028: Wed May 15 10:14:03 2024 00:20:18.022 read: IOPS=3579, BW=14.0MiB/s (14.7MB/s)(14.2MiB/1014msec) 00:20:18.022 slat (nsec): min=901, max=53704k, avg=140792.26, stdev=1472917.98 00:20:18.022 clat (msec): min=4, max=133, avg=18.96, stdev=18.69 00:20:18.022 lat (msec): min=4, max=133, avg=19.10, stdev=18.80 00:20:18.022 clat percentiles (msec): 00:20:18.022 | 1.00th=[ 6], 5.00th=[ 7], 
10.00th=[ 8], 20.00th=[ 9], 00:20:18.022 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 18], 00:20:18.022 | 70.00th=[ 21], 80.00th=[ 24], 90.00th=[ 28], 95.00th=[ 41], 00:20:18.022 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 124], 00:20:18.022 | 99.99th=[ 134] 00:20:18.022 write: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec); 0 zone resets 00:20:18.022 slat (nsec): min=1528, max=11757k, avg=97227.31, stdev=598603.59 00:20:18.022 clat (usec): min=2090, max=39165, avg=14568.30, stdev=6629.38 00:20:18.022 lat (usec): min=3407, max=39172, avg=14665.52, stdev=6659.37 00:20:18.022 clat percentiles (usec): 00:20:18.022 | 1.00th=[ 3752], 5.00th=[ 5014], 10.00th=[ 6652], 20.00th=[ 8848], 00:20:18.022 | 30.00th=[10814], 40.00th=[12649], 50.00th=[14091], 60.00th=[15008], 00:20:18.022 | 70.00th=[16712], 80.00th=[19530], 90.00th=[23987], 95.00th=[25822], 00:20:18.022 | 99.00th=[35914], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:20:18.022 | 99.99th=[39060] 00:20:18.023 bw ( KiB/s): min=12288, max=19824, per=17.89%, avg=16056.00, stdev=5328.76, samples=2 00:20:18.023 iops : min= 3072, max= 4956, avg=4014.00, stdev=1332.19, samples=2 00:20:18.023 lat (msec) : 4=0.89%, 10=26.51%, 20=47.05%, 50=23.36%, 100=1.09% 00:20:18.023 lat (msec) : 250=1.10% 00:20:18.023 cpu : usr=2.57%, sys=3.46%, ctx=480, majf=0, minf=1 00:20:18.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:18.023 issued rwts: total=3630,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:18.023 job2: (groupid=0, jobs=1): err= 0: pid=2830045: Wed May 15 10:14:03 2024 00:20:18.023 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:20:18.023 slat (nsec): min=925, max=11965k, avg=79191.95, stdev=496198.88 00:20:18.023 clat (usec): min=2877, max=32376, avg=10397.31, stdev=4223.55 00:20:18.023 lat (usec): min=3013, max=32388, avg=10476.50, stdev=4263.60 00:20:18.023 clat percentiles (usec): 00:20:18.023 | 1.00th=[ 4080], 5.00th=[ 6063], 10.00th=[ 6980], 20.00th=[ 7832], 00:20:18.023 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9765], 00:20:18.023 | 70.00th=[10814], 80.00th=[11994], 90.00th=[15926], 95.00th=[20317], 00:20:18.023 | 99.00th=[25035], 99.50th=[28967], 99.90th=[29230], 99.95th=[30540], 00:20:18.023 | 99.99th=[32375] 00:20:18.023 write: IOPS=5816, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1007msec); 0 zone resets 00:20:18.023 slat (nsec): min=1533, max=9670.0k, avg=89898.39, stdev=501929.79 00:20:18.023 clat (usec): min=1859, max=29688, avg=11678.00, stdev=5033.21 00:20:18.023 lat (usec): min=1864, max=29698, avg=11767.90, stdev=5060.34 00:20:18.023 clat percentiles (usec): 00:20:18.023 | 1.00th=[ 4424], 5.00th=[ 5604], 10.00th=[ 6587], 20.00th=[ 7832], 00:20:18.023 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[11076], 00:20:18.023 | 70.00th=[12780], 80.00th=[15795], 90.00th=[19792], 95.00th=[21890], 00:20:18.023 | 99.00th=[25822], 99.50th=[27395], 99.90th=[29492], 99.95th=[29754], 00:20:18.023 | 99.99th=[29754] 00:20:18.023 bw ( KiB/s): min=22688, max=23152, per=25.54%, avg=22920.00, stdev=328.10, samples=2 00:20:18.023 iops : min= 5672, max= 5788, avg=5730.00, stdev=82.02, samples=2 00:20:18.023 lat (msec) : 2=0.14%, 4=0.68%, 10=55.19%, 20=36.56%, 50=7.43% 00:20:18.023 cpu : usr=2.58%, 
sys=5.27%, ctx=618, majf=0, minf=1 00:20:18.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:18.023 issued rwts: total=5632,5857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:18.023 job3: (groupid=0, jobs=1): err= 0: pid=2830052: Wed May 15 10:14:03 2024 00:20:18.023 read: IOPS=6233, BW=24.4MiB/s (25.5MB/s)(24.5MiB/1005msec) 00:20:18.023 slat (nsec): min=930, max=12289k, avg=76605.29, stdev=521916.35 00:20:18.023 clat (usec): min=1739, max=38683, avg=9812.40, stdev=5088.00 00:20:18.023 lat (usec): min=3817, max=38692, avg=9889.01, stdev=5128.02 00:20:18.023 clat percentiles (usec): 00:20:18.023 | 1.00th=[ 4686], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6521], 00:20:18.023 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 8848], 00:20:18.023 | 70.00th=[ 9765], 80.00th=[11338], 90.00th=[15139], 95.00th=[21890], 00:20:18.023 | 99.00th=[29754], 99.50th=[31589], 99.90th=[33817], 99.95th=[38536], 00:20:18.023 | 99.99th=[38536] 00:20:18.023 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:20:18.023 slat (nsec): min=1521, max=9250.8k, avg=74616.09, stdev=373040.79 00:20:18.023 clat (usec): min=1641, max=34475, avg=9919.99, stdev=5097.03 00:20:18.023 lat (usec): min=2252, max=34477, avg=9994.61, stdev=5126.29 00:20:18.023 clat percentiles (usec): 00:20:18.023 | 1.00th=[ 3294], 5.00th=[ 4359], 10.00th=[ 5276], 20.00th=[ 6194], 00:20:18.023 | 30.00th=[ 6980], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9503], 00:20:18.023 | 70.00th=[11076], 80.00th=[13435], 90.00th=[16057], 95.00th=[18482], 00:20:18.023 | 99.00th=[31851], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:20:18.023 | 99.99th=[34341] 00:20:18.023 bw ( KiB/s): min=20424, max=32768, per=29.63%, avg=26596.00, stdev=8728.53, samples=2 00:20:18.023 iops : min= 5106, max= 8192, avg=6649.00, stdev=2182.13, samples=2 00:20:18.023 lat (msec) : 2=0.03%, 4=2.15%, 10=65.52%, 20=27.46%, 50=4.84% 00:20:18.023 cpu : usr=2.69%, sys=6.08%, ctx=683, majf=0, minf=1 00:20:18.023 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:18.023 issued rwts: total=6265,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.023 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:18.023 00:20:18.023 Run status group 0 (all jobs): 00:20:18.023 READ: bw=81.8MiB/s (85.8MB/s), 14.0MiB/s-24.4MiB/s (14.7MB/s-25.5MB/s), io=83.0MiB (87.0MB), run=1005-1014msec 00:20:18.023 WRITE: bw=87.7MiB/s (91.9MB/s), 15.8MiB/s-25.9MiB/s (16.5MB/s-27.1MB/s), io=88.9MiB (93.2MB), run=1005-1014msec 00:20:18.023 00:20:18.023 Disk stats (read/write): 00:20:18.023 nvme0n1: ios=4658/4783, merge=0/0, ticks=47665/53987, in_queue=101652, util=97.60% 00:20:18.023 nvme0n2: ios=3096/3111, merge=0/0, ticks=35601/29183, in_queue=64784, util=98.37% 00:20:18.023 nvme0n3: ios=4664/4765, merge=0/0, ticks=26162/28873, in_queue=55035, util=97.47% 00:20:18.023 nvme0n4: ios=5632/5699, merge=0/0, ticks=45448/49590, in_queue=95038, util=89.55% 00:20:18.023 10:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:20:18.023 10:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # 
fio_pid=2830244 00:20:18.023 10:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:20:18.023 10:14:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:18.023 [global] 00:20:18.023 thread=1 00:20:18.023 invalidate=1 00:20:18.023 rw=read 00:20:18.023 time_based=1 00:20:18.023 runtime=10 00:20:18.023 ioengine=libaio 00:20:18.023 direct=1 00:20:18.023 bs=4096 00:20:18.023 iodepth=1 00:20:18.023 norandommap=1 00:20:18.023 numjobs=1 00:20:18.023 00:20:18.023 [job0] 00:20:18.023 filename=/dev/nvme0n1 00:20:18.023 [job1] 00:20:18.023 filename=/dev/nvme0n2 00:20:18.023 [job2] 00:20:18.023 filename=/dev/nvme0n3 00:20:18.023 [job3] 00:20:18.023 filename=/dev/nvme0n4 00:20:18.023 Could not set queue depth (nvme0n1) 00:20:18.023 Could not set queue depth (nvme0n2) 00:20:18.023 Could not set queue depth (nvme0n3) 00:20:18.023 Could not set queue depth (nvme0n4) 00:20:18.592 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.592 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.592 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.592 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:18.592 fio-3.35 00:20:18.592 Starting 4 threads 00:20:21.144 10:14:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:21.144 10:14:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:21.144 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=249856, buflen=4096 00:20:21.144 fio: pid=2830535, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:21.406 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:21.406 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:21.406 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=270336, buflen=4096 00:20:21.406 fio: pid=2830527, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:21.406 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:21.406 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:21.667 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=282624, buflen=4096 00:20:21.667 fio: pid=2830489, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:21.667 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7168000, buflen=4096 00:20:21.667 fio: pid=2830504, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:21.667 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:21.667 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:20:21.667 00:20:21.667 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2830489: Wed May 15 10:14:07 2024 00:20:21.667 read: IOPS=23, BW=93.3KiB/s (95.5kB/s)(276KiB/2959msec) 00:20:21.667 slat (usec): min=24, max=14294, avg=273.05, stdev=1738.14 00:20:21.667 clat (usec): min=2655, max=43079, avg=42288.47, stdev=4849.99 00:20:21.667 lat (usec): min=2688, max=56945, avg=42565.11, stdev=5172.91 00:20:21.667 clat percentiles (usec): 00:20:21.667 | 1.00th=[ 2671], 5.00th=[42206], 10.00th=[42206], 20.00th=[42730], 00:20:21.667 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:20:21.667 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:20:21.667 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:21.667 | 99.99th=[43254] 00:20:21.667 bw ( KiB/s): min= 88, max= 96, per=3.75%, avg=94.40, stdev= 3.58, samples=5 00:20:21.667 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5 00:20:21.667 lat (msec) : 4=1.43%, 50=97.14% 00:20:21.667 cpu : usr=0.00%, sys=0.10%, ctx=76, majf=0, minf=1 00:20:21.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.667 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.667 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.667 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2830504: Wed May 15 10:14:07 2024 00:20:21.667 read: IOPS=564, BW=2256KiB/s (2310kB/s)(7000KiB/3103msec) 00:20:21.667 slat (usec): min=23, max=26258, avg=77.22, stdev=995.85 00:20:21.667 clat (usec): min=892, max=10334, avg=1676.38, stdev=249.73 00:20:21.667 lat (usec): min=916, max=27875, avg=1753.63, stdev=1027.99 00:20:21.667 clat percentiles (usec): 00:20:21.667 | 1.00th=[ 1287], 5.00th=[ 1549], 10.00th=[ 1598], 20.00th=[ 1647], 00:20:21.667 | 30.00th=[ 1663], 40.00th=[ 1663], 50.00th=[ 1680], 60.00th=[ 1696], 00:20:21.667 | 70.00th=[ 1713], 80.00th=[ 1713], 90.00th=[ 1729], 95.00th=[ 1762], 00:20:21.667 | 99.00th=[ 1811], 99.50th=[ 1958], 99.90th=[ 6194], 99.95th=[10290], 00:20:21.667 | 99.99th=[10290] 00:20:21.667 bw ( KiB/s): min= 1952, max= 2352, per=90.77%, avg=2277.33, stdev=159.65, samples=6 00:20:21.667 iops : min= 488, max= 588, avg=569.33, stdev=39.91, samples=6 00:20:21.667 lat (usec) : 1000=0.23% 00:20:21.667 lat (msec) : 2=99.31%, 4=0.29%, 10=0.06%, 20=0.06% 00:20:21.667 cpu : usr=0.81%, sys=1.42%, ctx=1760, majf=0, minf=1 00:20:21.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.667 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.667 issued rwts: total=1751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.667 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2830527: Wed May 15 10:14:07 2024 00:20:21.667 read: IOPS=24, BW=95.0KiB/s (97.3kB/s)(264KiB/2778msec) 00:20:21.667 slat (usec): min=25, max=3402, avg=76.34, stdev=412.55 00:20:21.667 clat (usec): min=2316, max=43059, avg=41671.71, stdev=4939.80 00:20:21.667 lat (usec): min=2351, max=46074, avg=41748.81, stdev=4966.56 00:20:21.667 clat percentiles 
(usec): 00:20:21.667 | 1.00th=[ 2311], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:20:21.667 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:21.667 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:20:21.667 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:21.667 | 99.99th=[43254] 00:20:21.667 bw ( KiB/s): min= 96, max= 96, per=3.83%, avg=96.00, stdev= 0.00, samples=5 00:20:21.667 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:20:21.667 lat (msec) : 4=1.49%, 50=97.01% 00:20:21.667 cpu : usr=0.00%, sys=0.14%, ctx=69, majf=0, minf=1 00:20:21.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.667 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.667 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.667 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2830535: Wed May 15 10:14:07 2024 00:20:21.667 read: IOPS=23, BW=93.7KiB/s (95.9kB/s)(244KiB/2605msec) 00:20:21.667 slat (nsec): min=24999, max=45812, avg=26204.81, stdev=3002.23 00:20:21.668 clat (usec): min=2471, max=43123, avg=42305.85, stdev=5185.55 00:20:21.668 lat (usec): min=2509, max=43148, avg=42332.06, stdev=5184.00 00:20:21.668 clat percentiles (usec): 00:20:21.668 | 1.00th=[ 2474], 5.00th=[42730], 10.00th=[42730], 20.00th=[42730], 00:20:21.668 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[42730], 00:20:21.668 | 70.00th=[43254], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:20:21.668 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:21.668 | 99.99th=[43254] 00:20:21.668 bw ( KiB/s): min= 88, max= 96, per=3.75%, avg=94.40, stdev= 3.58, samples=5 00:20:21.668 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5 00:20:21.668 lat (msec) : 4=1.61%, 50=96.77% 00:20:21.668 cpu : usr=0.00%, sys=0.12%, ctx=65, majf=0, minf=2 00:20:21.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:21.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.668 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:21.668 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:21.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:21.668 00:20:21.668 Run status group 0 (all jobs): 00:20:21.668 READ: bw=2509KiB/s (2569kB/s), 93.3KiB/s-2256KiB/s (95.5kB/s-2310kB/s), io=7784KiB (7971kB), run=2605-3103msec 00:20:21.668 00:20:21.668 Disk stats (read/write): 00:20:21.668 nvme0n1: ios=98/0, merge=0/0, ticks=3099/0, in_queue=3099, util=98.66% 00:20:21.668 nvme0n2: ios=1750/0, merge=0/0, ticks=2887/0, in_queue=2887, util=92.91% 00:20:21.668 nvme0n3: ios=62/0, merge=0/0, ticks=2583/0, in_queue=2583, util=96.03% 00:20:21.668 nvme0n4: ios=94/0, merge=0/0, ticks=3493/0, in_queue=3493, util=99.29% 00:20:21.929 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:21.929 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:22.190 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:20:22.190 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:22.190 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:22.190 10:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:22.451 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:22.451 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:22.713 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:20:22.713 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2830244 00:20:22.713 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:22.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:22.714 nvmf hotplug test: fio failed as expected 00:20:22.714 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.976 rmmod nvme_tcp 00:20:22.976 rmmod nvme_fabrics 
00:20:22.976 rmmod nvme_keyring 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2826741 ']' 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2826741 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 2826741 ']' 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 2826741 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2826741 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2826741' 00:20:22.976 killing process with pid 2826741 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 2826741 00:20:22.976 [2024-05-15 10:14:08.705213] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:22.976 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 2826741 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.238 10:14:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.157 10:14:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.157 00:20:25.157 real 0m28.634s 00:20:25.157 user 2m37.548s 00:20:25.157 sys 0m8.763s 00:20:25.157 10:14:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:25.157 10:14:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.157 ************************************ 00:20:25.157 END TEST nvmf_fio_target 00:20:25.157 ************************************ 00:20:25.157 10:14:10 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:25.157 10:14:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:20:25.157 10:14:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:25.157 10:14:10 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.420 ************************************ 00:20:25.420 START TEST nvmf_bdevio 00:20:25.420 ************************************ 00:20:25.420 10:14:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:25.420 * Looking for test storage... 00:20:25.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.420 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.421 10:14:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:33.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:33.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:33.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:33.575 
Found net devices under 0000:4b:00.1: cvl_0_1 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:20:33.575 00:20:33.575 --- 10.0.0.2 ping statistics --- 00:20:33.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.575 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:20:33.575 00:20:33.575 --- 10.0.0.1 ping statistics --- 00:20:33.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.575 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2835691 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2835691 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 2835691 ']' 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:33.575 10:14:18 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.575 [2024-05-15 10:14:18.543848] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:20:33.575 [2024-05-15 10:14:18.543921] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.575 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.575 [2024-05-15 10:14:18.633532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.575 [2024-05-15 10:14:18.681091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.575 [2024-05-15 10:14:18.681144] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
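The nvmf_tcp_init block traced above is where the two E810 ports get wired into a self-contained NVMe/TCP test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings confirm reachability in both directions. A condensed sketch of that plumbing, using the interface names and addresses from this particular run (they differ per machine):

  # target port goes into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both sides and bring the links up
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side and sanity-check connectivity
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1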
00:20:33.575 [2024-05-15 10:14:18.681153] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.575 [2024-05-15 10:14:18.681160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.575 [2024-05-15 10:14:18.681166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.575 [2024-05-15 10:14:18.681364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:33.575 [2024-05-15 10:14:18.681566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:33.575 [2024-05-15 10:14:18.681718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.575 [2024-05-15 10:14:18.681720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:33.575 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:33.575 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:20:33.575 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:33.575 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:20:33.575 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.836 [2024-05-15 10:14:19.397559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.836 Malloc0 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
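Taken together, the rpc_cmd calls traced above are a five-step RPC sequence that builds the bdevio target: a TCP transport, a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), and a subsystem exposing it on 10.0.0.2 port 4420. Spelled out as plain scripts/rpc.py invocations, as a sketch (the suite issues the same RPCs through its rpc_cmd helper against the target's default /var/tmp/spdk.sock socket):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420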
00:20:33.836 [2024-05-15 10:14:19.462653] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:33.836 [2024-05-15 10:14:19.462987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:33.836 { 00:20:33.836 "params": { 00:20:33.836 "name": "Nvme$subsystem", 00:20:33.836 "trtype": "$TEST_TRANSPORT", 00:20:33.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.836 "adrfam": "ipv4", 00:20:33.836 "trsvcid": "$NVMF_PORT", 00:20:33.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.836 "hdgst": ${hdgst:-false}, 00:20:33.836 "ddgst": ${ddgst:-false} 00:20:33.836 }, 00:20:33.836 "method": "bdev_nvme_attach_controller" 00:20:33.836 } 00:20:33.836 EOF 00:20:33.836 )") 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:33.836 10:14:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:33.836 "params": { 00:20:33.836 "name": "Nvme1", 00:20:33.836 "trtype": "tcp", 00:20:33.836 "traddr": "10.0.0.2", 00:20:33.836 "adrfam": "ipv4", 00:20:33.836 "trsvcid": "4420", 00:20:33.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.836 "hdgst": false, 00:20:33.836 "ddgst": false 00:20:33.836 }, 00:20:33.836 "method": "bdev_nvme_attach_controller" 00:20:33.836 }' 00:20:33.836 [2024-05-15 10:14:19.523436] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
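The JSON fragment printed above is the whole of bdevio's configuration for this run: one bdev_nvme_attach_controller call aimed at the listener created a moment earlier, handed to the binary on /dev/fd/62 via process substitution. As a rough sketch of reproducing the same invocation by hand, the fragment can be saved inside SPDK's usual JSON-config envelope (the "subsystems"/"bdev"/"config" wrapper and the config.json path are assumptions for illustration, not something the trace shows directly):

  cat > config.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  test/bdev/bdevio/bdevio --json config.json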
00:20:33.836 [2024-05-15 10:14:19.523526] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835808 ] 00:20:33.836 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.836 [2024-05-15 10:14:19.589484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:33.836 [2024-05-15 10:14:19.630329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.836 [2024-05-15 10:14:19.630396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.836 [2024-05-15 10:14:19.630400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.408 I/O targets: 00:20:34.408 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:34.408 00:20:34.408 00:20:34.408 CUnit - A unit testing framework for C - Version 2.1-3 00:20:34.408 http://cunit.sourceforge.net/ 00:20:34.408 00:20:34.408 00:20:34.408 Suite: bdevio tests on: Nvme1n1 00:20:34.408 Test: blockdev write read block ...passed 00:20:34.408 Test: blockdev write zeroes read block ...passed 00:20:34.408 Test: blockdev write zeroes read no split ...passed 00:20:34.408 Test: blockdev write zeroes read split ...passed 00:20:34.408 Test: blockdev write zeroes read split partial ...passed 00:20:34.408 Test: blockdev reset ...[2024-05-15 10:14:20.151931] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:34.408 [2024-05-15 10:14:20.152008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2028410 (9): Bad file descriptor 00:20:34.408 [2024-05-15 10:14:20.173759] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:34.408 passed 00:20:34.408 Test: blockdev write read 8 blocks ...passed 00:20:34.408 Test: blockdev write read size > 128k ...passed 00:20:34.408 Test: blockdev write read invalid size ...passed 00:20:34.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:34.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:34.680 Test: blockdev write read max offset ...passed 00:20:34.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:34.680 Test: blockdev writev readv 8 blocks ...passed 00:20:34.680 Test: blockdev writev readv 30 x 1block ...passed 00:20:34.680 Test: blockdev writev readv block ...passed 00:20:34.680 Test: blockdev writev readv size > 128k ...passed 00:20:34.680 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:34.680 Test: blockdev comparev and writev ...[2024-05-15 10:14:20.416098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.416125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:34.680 [2024-05-15 10:14:20.416136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.416142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:34.680 [2024-05-15 10:14:20.416828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.416837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:34.680 [2024-05-15 10:14:20.416847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.416857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:34.680 [2024-05-15 10:14:20.417539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.417549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:34.680 [2024-05-15 10:14:20.417558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.417564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:34.680 [2024-05-15 10:14:20.418235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.418243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:34.680 [2024-05-15 10:14:20.418253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:34.680 [2024-05-15 10:14:20.418258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:34.680 passed 00:20:35.000 Test: blockdev nvme passthru rw ...passed 00:20:35.000 Test: blockdev nvme passthru vendor specific ...[2024-05-15 10:14:20.504613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:35.000 [2024-05-15 10:14:20.504627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:35.000 [2024-05-15 10:14:20.505193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:35.000 [2024-05-15 10:14:20.505201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:35.000 [2024-05-15 10:14:20.505756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:35.000 [2024-05-15 10:14:20.505765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:35.000 [2024-05-15 10:14:20.506337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:35.000 [2024-05-15 10:14:20.506345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:35.000 passed 00:20:35.000 Test: blockdev nvme admin passthru ...passed 00:20:35.000 Test: blockdev copy ...passed 00:20:35.000 00:20:35.000 Run Summary: Type Total Ran Passed Failed Inactive 00:20:35.000 suites 1 1 n/a 0 0 00:20:35.000 tests 23 23 23 0 0 00:20:35.000 asserts 152 152 152 0 n/a 00:20:35.000 00:20:35.000 Elapsed time = 1.348 seconds 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.001 rmmod nvme_tcp 00:20:35.001 rmmod nvme_fabrics 00:20:35.001 rmmod nvme_keyring 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2835691 ']' 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2835691 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 
2835691 ']' 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 2835691 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:20:35.001 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2835691 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2835691' 00:20:35.262 killing process with pid 2835691 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 2835691 00:20:35.262 [2024-05-15 10:14:20.812224] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 2835691 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.262 10:14:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.812 10:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.812 00:20:37.812 real 0m12.019s 00:20:37.812 user 0m13.542s 00:20:37.812 sys 0m6.004s 00:20:37.812 10:14:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:20:37.812 10:14:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:37.812 ************************************ 00:20:37.812 END TEST nvmf_bdevio 00:20:37.812 ************************************ 00:20:37.812 10:14:23 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:37.812 10:14:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:20:37.812 10:14:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:20:37.812 10:14:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:37.812 ************************************ 00:20:37.812 START TEST nvmf_auth_target 00:20:37.812 ************************************ 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:37.812 * Looking for test storage... 
00:20:37.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.812 10:14:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:44.413 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:44.413 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:44.413 Found net devices under 
0000:4b:00.0: cvl_0_0 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:44.413 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.413 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:20:44.675 00:20:44.675 --- 10.0.0.2 ping statistics --- 00:20:44.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.675 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:20:44.675 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.631 ms 00:20:44.936 00:20:44.936 --- 10.0.0.1 ping statistics --- 00:20:44.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.936 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2840159 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2840159 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2840159 ']' 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
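For the auth suite the namespace setup is repeated, and nvmfappstart then launches the target inside it, this time with -L nvmf_auth so the target-side auth code logs each step, before waitforlisten polls the default /var/tmp/spdk.sock RPC socket. The launch boils down to roughly the following sketch (workspace path as in this job; backgrounding and pid bookkeeping trimmed):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # wait until the RPC socket answers before issuing any rpc_cmd calls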
00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable
00:20:44.936 10:14:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=2840476
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=69415bb84b384516789c434f98db6da5640c7e532098dc7f
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.vZV
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 69415bb84b384516789c434f98db6da5640c7e532098dc7f 0
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 69415bb84b384516789c434f98db6da5640c7e532098dc7f 0
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=69415bb84b384516789c434f98db6da5640c7e532098dc7f
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.vZV
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.vZV
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.vZV
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256
00:20:45.879 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fd4ee809e56da171301b37c502285a66
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.E7x
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fd4ee809e56da171301b37c502285a66 1
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fd4ee809e56da171301b37c502285a66 1
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fd4ee809e56da171301b37c502285a66
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.E7x
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.E7x
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.E7x
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7fc33f1298725022b8e8b459a05b995605aa69e6ccf26310
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2ye
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7fc33f1298725022b8e8b459a05b995605aa69e6ccf26310 2
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7fc33f1298725022b8e8b459a05b995605aa69e6ccf26310 2
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7fc33f1298725022b8e8b459a05b995605aa69e6ccf26310
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2ye
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2ye
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.2ye
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=842d40c605e29151b7c0839db12e47a71903f5e595aa71a5e0b706b158f0542a
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MQA
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 842d40c605e29151b7c0839db12e47a71903f5e595aa71a5e0b706b158f0542a 3
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 842d40c605e29151b7c0839db12e47a71903f5e595aa71a5e0b706b158f0542a 3
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=842d40c605e29151b7c0839db12e47a71903f5e595aa71a5e0b706b158f0542a
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MQA
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MQA
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.MQA
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 2840159
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2840159 ']'
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:45.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
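The gen_dhchap_key calls traced above draw len/2 random bytes with xxd -p and wrap the resulting hex string in the DHHC-1 secret representation; only "python -" shows in the trace because set -x does not capture heredoc bodies. The sketch below is a reconstruction, not the suite's verbatim helper: the encoding (base64 over the ASCII key followed by its little-endian CRC-32, with the digest index as a two-hex-digit field) is inferred from the DHHC-1:00:/01:/02:/03: secrets that appear verbatim in the nvme connect calls later in this log.

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Emit DHHC-1:<digest id>:<base64(key || crc32_le(key))>: into the key file,
    # the same shape the --dhchap-secret arguments take further down the log.
    KEY=$key ID=${digests[$digest]} python3 -c '
import base64, os, zlib
key = os.environ["KEY"].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(os.environ["ID"]), base64.b64encode(key + crc).decode()))
' > "$file"
    chmod 0600 "$file"  # secrets are credentials, so restrict the file mode
    echo "$file"
}

For instance, gen_dhchap_key null 48 yields a 48-character hex key whose secret starts with DHHC-1:00:, matching the key0 secret used below.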
00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:45.880 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 2840476 /var/tmp/host.sock 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2840476 ']' 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:46.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:20:46.141 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.401 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:20:46.401 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:20:46.401 10:14:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:20:46.401 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.401 10:14:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vZV 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vZV 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vZV 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.E7x 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.E7x 00:20:46.401 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.E7x 00:20:46.662 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:46.662 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.2ye 00:20:46.662 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.662 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.662 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.662 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.2ye 00:20:46.662 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.2ye 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.MQA 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.MQA 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.MQA 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:46.923 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:47.185 10:14:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:47.446 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:47.446 { 00:20:47.446 "cntlid": 1, 00:20:47.446 "qid": 0, 00:20:47.446 "state": "enabled", 00:20:47.446 "listen_address": { 00:20:47.446 "trtype": "TCP", 00:20:47.446 "adrfam": "IPv4", 00:20:47.446 "traddr": "10.0.0.2", 00:20:47.446 "trsvcid": "4420" 00:20:47.446 }, 00:20:47.446 "peer_address": { 00:20:47.446 "trtype": "TCP", 00:20:47.446 "adrfam": "IPv4", 00:20:47.446 "traddr": "10.0.0.1", 00:20:47.446 "trsvcid": "42568" 00:20:47.446 }, 00:20:47.446 "auth": { 00:20:47.446 "state": "completed", 00:20:47.446 "digest": "sha256", 00:20:47.446 "dhgroup": "null" 00:20:47.446 } 00:20:47.446 } 00:20:47.446 ]' 00:20:47.446 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:47.707 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.707 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:47.707 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:47.707 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:47.707 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.707 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.707 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.967 10:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:48.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:48.537 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:48.798 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:49.058 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.058 10:14:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.058 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:49.058 { 00:20:49.058 "cntlid": 3, 00:20:49.058 "qid": 0, 00:20:49.058 "state": "enabled", 00:20:49.058 "listen_address": { 00:20:49.058 "trtype": "TCP", 00:20:49.058 "adrfam": "IPv4", 00:20:49.058 "traddr": "10.0.0.2", 00:20:49.058 "trsvcid": "4420" 00:20:49.058 }, 00:20:49.058 "peer_address": { 00:20:49.058 "trtype": "TCP", 00:20:49.058 "adrfam": "IPv4", 00:20:49.058 "traddr": "10.0.0.1", 00:20:49.058 "trsvcid": "42586" 00:20:49.058 }, 00:20:49.058 "auth": { 00:20:49.058 "state": "completed", 00:20:49.058 "digest": "sha256", 00:20:49.058 "dhgroup": "null" 00:20:49.058 } 00:20:49.059 } 00:20:49.059 ]' 00:20:49.059 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:49.319 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.319 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:49.319 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:49.320 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:49.320 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.320 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.320 10:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.588 10:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:50.162 10:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.423 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.684 00:20:50.684 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:50.684 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:50.684 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.684 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.684 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.684 10:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.684 10:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:50.945 { 00:20:50.945 "cntlid": 5, 00:20:50.945 "qid": 0, 00:20:50.945 "state": "enabled", 00:20:50.945 "listen_address": { 00:20:50.945 "trtype": "TCP", 00:20:50.945 "adrfam": "IPv4", 00:20:50.945 "traddr": "10.0.0.2", 00:20:50.945 "trsvcid": "4420" 00:20:50.945 }, 00:20:50.945 "peer_address": { 00:20:50.945 "trtype": "TCP", 00:20:50.945 "adrfam": "IPv4", 00:20:50.945 "traddr": "10.0.0.1", 00:20:50.945 "trsvcid": "42624" 00:20:50.945 }, 00:20:50.945 "auth": { 00:20:50.945 "state": "completed", 00:20:50.945 "digest": "sha256", 00:20:50.945 "dhgroup": "null" 00:20:50.945 } 00:20:50.945 } 00:20:50.945 ]' 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.945 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.211 10:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:51.782 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.043 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.303 00:20:52.303 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:52.303 10:14:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.303 10:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:52.303 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.303 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.303 10:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.303 10:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:52.562 { 00:20:52.562 "cntlid": 7, 00:20:52.562 "qid": 0, 00:20:52.562 "state": "enabled", 00:20:52.562 "listen_address": { 00:20:52.562 "trtype": "TCP", 00:20:52.562 "adrfam": "IPv4", 00:20:52.562 "traddr": "10.0.0.2", 00:20:52.562 "trsvcid": "4420" 00:20:52.562 }, 00:20:52.562 "peer_address": { 00:20:52.562 "trtype": "TCP", 00:20:52.562 "adrfam": "IPv4", 00:20:52.562 "traddr": "10.0.0.1", 00:20:52.562 "trsvcid": "42654" 00:20:52.562 }, 00:20:52.562 "auth": { 00:20:52.562 "state": "completed", 00:20:52.562 "digest": "sha256", 00:20:52.562 "dhgroup": "null" 00:20:52.562 } 00:20:52.562 } 00:20:52.562 ]' 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.562 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.822 10:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:53.405 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:53.706 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:53.968 00:20:53.968 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:53.968 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.968 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:53.968 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.968 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.968 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.968 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:54.230 { 00:20:54.230 "cntlid": 9, 00:20:54.230 "qid": 0, 00:20:54.230 "state": "enabled", 00:20:54.230 "listen_address": { 00:20:54.230 "trtype": "TCP", 00:20:54.230 "adrfam": "IPv4", 00:20:54.230 "traddr": "10.0.0.2", 00:20:54.230 "trsvcid": "4420" 00:20:54.230 }, 00:20:54.230 "peer_address": { 00:20:54.230 "trtype": "TCP", 00:20:54.230 "adrfam": "IPv4", 00:20:54.230 "traddr": "10.0.0.1", 
00:20:54.230 "trsvcid": "42688" 00:20:54.230 }, 00:20:54.230 "auth": { 00:20:54.230 "state": "completed", 00:20:54.230 "digest": "sha256", 00:20:54.230 "dhgroup": "ffdhe2048" 00:20:54.230 } 00:20:54.230 } 00:20:54.230 ]' 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.230 10:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.492 10:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:20:55.066 10:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.066 10:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:55.067 10:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.067 10:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.067 10:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.067 10:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:55.067 10:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:55.067 10:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:55.328 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:55.590 00:20:55.590 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:55.590 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:55.590 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:55.852 { 00:20:55.852 "cntlid": 11, 00:20:55.852 "qid": 0, 00:20:55.852 "state": "enabled", 00:20:55.852 "listen_address": { 00:20:55.852 "trtype": "TCP", 00:20:55.852 "adrfam": "IPv4", 00:20:55.852 "traddr": "10.0.0.2", 00:20:55.852 "trsvcid": "4420" 00:20:55.852 }, 00:20:55.852 "peer_address": { 00:20:55.852 "trtype": "TCP", 00:20:55.852 "adrfam": "IPv4", 00:20:55.852 "traddr": "10.0.0.1", 00:20:55.852 "trsvcid": "52806" 00:20:55.852 }, 00:20:55.852 "auth": { 00:20:55.852 "state": "completed", 00:20:55.852 "digest": "sha256", 00:20:55.852 "dhgroup": "ffdhe2048" 00:20:55.852 } 00:20:55.852 } 00:20:55.852 ]' 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.852 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.114 10:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:56.688 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.948 10:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.949 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:56.949 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:57.209 00:20:57.209 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:57.209 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:57.209 10:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:57.470 { 00:20:57.470 "cntlid": 13, 00:20:57.470 "qid": 0, 00:20:57.470 "state": "enabled", 00:20:57.470 "listen_address": { 00:20:57.470 "trtype": "TCP", 00:20:57.470 "adrfam": "IPv4", 00:20:57.470 "traddr": "10.0.0.2", 00:20:57.470 "trsvcid": "4420" 00:20:57.470 }, 00:20:57.470 "peer_address": { 00:20:57.470 "trtype": "TCP", 00:20:57.470 "adrfam": "IPv4", 00:20:57.470 "traddr": "10.0.0.1", 00:20:57.470 "trsvcid": "52850" 00:20:57.470 }, 00:20:57.470 "auth": { 00:20:57.470 "state": "completed", 00:20:57.470 "digest": "sha256", 00:20:57.470 "dhgroup": "ffdhe2048" 00:20:57.470 } 00:20:57.470 } 00:20:57.470 ]' 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:57.470 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.471 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:57.471 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.471 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.471 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.732 10:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.675 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.936 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:58.936 { 00:20:58.936 "cntlid": 15, 00:20:58.936 "qid": 0, 00:20:58.936 "state": "enabled", 00:20:58.936 "listen_address": { 00:20:58.936 "trtype": "TCP", 00:20:58.936 "adrfam": "IPv4", 00:20:58.936 "traddr": "10.0.0.2", 00:20:58.936 "trsvcid": "4420" 00:20:58.936 }, 00:20:58.936 "peer_address": { 00:20:58.936 "trtype": "TCP", 00:20:58.936 "adrfam": "IPv4", 00:20:58.936 "traddr": "10.0.0.1", 00:20:58.936 "trsvcid": "52874" 00:20:58.936 }, 00:20:58.936 "auth": { 00:20:58.936 "state": "completed", 00:20:58.936 "digest": "sha256", 00:20:58.936 "dhgroup": "ffdhe2048" 00:20:58.936 } 00:20:58.936 } 00:20:58.936 ]' 00:20:58.936 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:59.197 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.197 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:59.197 10:14:44 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.197 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:59.197 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.197 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.197 10:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.458 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:00.032 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.294 10:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:00.294 10:14:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:00.555 00:21:00.555 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:00.555 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:00.555 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.555 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.555 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.555 10:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.555 10:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:00.816 { 00:21:00.816 "cntlid": 17, 00:21:00.816 "qid": 0, 00:21:00.816 "state": "enabled", 00:21:00.816 "listen_address": { 00:21:00.816 "trtype": "TCP", 00:21:00.816 "adrfam": "IPv4", 00:21:00.816 "traddr": "10.0.0.2", 00:21:00.816 "trsvcid": "4420" 00:21:00.816 }, 00:21:00.816 "peer_address": { 00:21:00.816 "trtype": "TCP", 00:21:00.816 "adrfam": "IPv4", 00:21:00.816 "traddr": "10.0.0.1", 00:21:00.816 "trsvcid": "52910" 00:21:00.816 }, 00:21:00.816 "auth": { 00:21:00.816 "state": "completed", 00:21:00.816 "digest": "sha256", 00:21:00.816 "dhgroup": "ffdhe3072" 00:21:00.816 } 00:21:00.816 } 00:21:00.816 ]' 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.816 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.078 10:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:01.651 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:01.912 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:02.174 00:21:02.174 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:02.174 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.174 10:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:02.436 { 
00:21:02.436 "cntlid": 19, 00:21:02.436 "qid": 0, 00:21:02.436 "state": "enabled", 00:21:02.436 "listen_address": { 00:21:02.436 "trtype": "TCP", 00:21:02.436 "adrfam": "IPv4", 00:21:02.436 "traddr": "10.0.0.2", 00:21:02.436 "trsvcid": "4420" 00:21:02.436 }, 00:21:02.436 "peer_address": { 00:21:02.436 "trtype": "TCP", 00:21:02.436 "adrfam": "IPv4", 00:21:02.436 "traddr": "10.0.0.1", 00:21:02.436 "trsvcid": "52948" 00:21:02.436 }, 00:21:02.436 "auth": { 00:21:02.436 "state": "completed", 00:21:02.436 "digest": "sha256", 00:21:02.436 "dhgroup": "ffdhe3072" 00:21:02.436 } 00:21:02.436 } 00:21:02.436 ]' 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.436 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.698 10:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:03.271 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.533 
10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:03.533 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:03.794 00:21:03.794 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:03.794 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:03.794 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:04.056 { 00:21:04.056 "cntlid": 21, 00:21:04.056 "qid": 0, 00:21:04.056 "state": "enabled", 00:21:04.056 "listen_address": { 00:21:04.056 "trtype": "TCP", 00:21:04.056 "adrfam": "IPv4", 00:21:04.056 "traddr": "10.0.0.2", 00:21:04.056 "trsvcid": "4420" 00:21:04.056 }, 00:21:04.056 "peer_address": { 00:21:04.056 "trtype": "TCP", 00:21:04.056 "adrfam": "IPv4", 00:21:04.056 "traddr": "10.0.0.1", 00:21:04.056 "trsvcid": "52964" 00:21:04.056 }, 00:21:04.056 "auth": { 00:21:04.056 "state": "completed", 00:21:04.056 "digest": "sha256", 00:21:04.056 "dhgroup": "ffdhe3072" 00:21:04.056 } 00:21:04.056 } 00:21:04.056 ]' 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.056 10:14:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.318 10:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:05.263 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.264 10:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.525 00:21:05.525 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:05.525 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.525 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:05.787 { 00:21:05.787 "cntlid": 23, 00:21:05.787 "qid": 0, 00:21:05.787 "state": "enabled", 00:21:05.787 "listen_address": { 00:21:05.787 "trtype": "TCP", 00:21:05.787 "adrfam": "IPv4", 00:21:05.787 "traddr": "10.0.0.2", 00:21:05.787 "trsvcid": "4420" 00:21:05.787 }, 00:21:05.787 "peer_address": { 00:21:05.787 "trtype": "TCP", 00:21:05.787 "adrfam": "IPv4", 00:21:05.787 "traddr": "10.0.0.1", 00:21:05.787 "trsvcid": "41758" 00:21:05.787 }, 00:21:05.787 "auth": { 00:21:05.787 "state": "completed", 00:21:05.787 "digest": "sha256", 00:21:05.787 "dhgroup": "ffdhe3072" 00:21:05.787 } 00:21:05.787 } 00:21:05.787 ]' 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.787 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.050 10:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:06.622 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.622 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.622 10:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.622 10:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 
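Each pass of the loop traced above follows the same shape: restrict the host-side DHCHAP options to the digest and dhgroup under test, authorize the host NQN on the target subsystem with one key, attach a controller over TCP, confirm via nvmf_subsystem_get_qpairs that the qpair reports the expected digest, dhgroup, and a completed auth state, re-run the handshake with the kernel initiator, then remove the host again. A condensed sketch of one such iteration follows; the rpc.py path, NQNs, address, and host UUID are copied from the trace, while the socket layout (target app on the default RPC socket, host app on /var/tmp/host.sock), the key name, and the secret variable are assumptions standing in for values set up earlier in auth.sh and not shown in this excerpt.

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate-style iteration, assuming the paths and
    # identifiers visible in the trace above. Not the test script itself.
    set -euo pipefail

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    digest=sha256 dhgroup=ffdhe3072 key=key0
    # DHHC-1 secret matching "$key"; placeholder for the value printed in the log.
    secret=${secret:?export the DHHC-1 secret for the key under test}

    # Host side: only advertise the digest/dhgroup combination under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: allow this host NQN with the DHCHAP key under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

    # Attach over TCP, then verify the authenticated qpair on the target.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Repeat the handshake with the kernel initiator, then drop the host entry.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret "$secret"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The entries that follow repeat this cycle for the remaining keys and for the larger DH groups (ffdhe4096, ffdhe6144, ffdhe8192), which is why the same rpc.py invocations recur with only the --dhchap-dhgroups and --dhchap-key arguments changing.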
00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:06.884 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:07.145 00:21:07.145 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:07.145 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:07.145 10:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:07.407 { 00:21:07.407 "cntlid": 25, 00:21:07.407 "qid": 0, 00:21:07.407 "state": "enabled", 00:21:07.407 "listen_address": { 00:21:07.407 "trtype": "TCP", 00:21:07.407 "adrfam": "IPv4", 00:21:07.407 "traddr": "10.0.0.2", 00:21:07.407 "trsvcid": "4420" 00:21:07.407 }, 00:21:07.407 "peer_address": { 00:21:07.407 "trtype": "TCP", 00:21:07.407 "adrfam": "IPv4", 00:21:07.407 "traddr": "10.0.0.1", 00:21:07.407 "trsvcid": 
"41782" 00:21:07.407 }, 00:21:07.407 "auth": { 00:21:07.407 "state": "completed", 00:21:07.407 "digest": "sha256", 00:21:07.407 "dhgroup": "ffdhe4096" 00:21:07.407 } 00:21:07.407 } 00:21:07.407 ]' 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.407 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.672 10:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:08.651 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:08.912 00:21:08.912 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:08.912 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:08.912 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:09.173 { 00:21:09.173 "cntlid": 27, 00:21:09.173 "qid": 0, 00:21:09.173 "state": "enabled", 00:21:09.173 "listen_address": { 00:21:09.173 "trtype": "TCP", 00:21:09.173 "adrfam": "IPv4", 00:21:09.173 "traddr": "10.0.0.2", 00:21:09.173 "trsvcid": "4420" 00:21:09.173 }, 00:21:09.173 "peer_address": { 00:21:09.173 "trtype": "TCP", 00:21:09.173 "adrfam": "IPv4", 00:21:09.173 "traddr": "10.0.0.1", 00:21:09.173 "trsvcid": "41810" 00:21:09.173 }, 00:21:09.173 "auth": { 00:21:09.173 "state": "completed", 00:21:09.173 "digest": "sha256", 00:21:09.173 "dhgroup": "ffdhe4096" 00:21:09.173 } 00:21:09.173 } 00:21:09.173 ]' 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.173 10:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.443 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:10.017 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.017 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:10.017 10:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.017 10:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.277 10:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.277 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:10.277 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:10.277 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:10.277 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:21:10.277 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:10.278 10:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:10.538 00:21:10.538 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:10.538 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:10.538 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:10.800 { 00:21:10.800 "cntlid": 29, 00:21:10.800 "qid": 0, 00:21:10.800 "state": "enabled", 00:21:10.800 "listen_address": { 00:21:10.800 "trtype": "TCP", 00:21:10.800 "adrfam": "IPv4", 00:21:10.800 "traddr": "10.0.0.2", 00:21:10.800 "trsvcid": "4420" 00:21:10.800 }, 00:21:10.800 "peer_address": { 00:21:10.800 "trtype": "TCP", 00:21:10.800 "adrfam": "IPv4", 00:21:10.800 "traddr": "10.0.0.1", 00:21:10.800 "trsvcid": "41834" 00:21:10.800 }, 00:21:10.800 "auth": { 00:21:10.800 "state": "completed", 00:21:10.800 "digest": "sha256", 00:21:10.800 "dhgroup": "ffdhe4096" 00:21:10.800 } 00:21:10.800 } 00:21:10.800 ]' 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.800 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.061 10:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.006 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.267 00:21:12.267 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:12.267 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:12.267 10:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:12.528 { 00:21:12.528 "cntlid": 31, 00:21:12.528 "qid": 0, 00:21:12.528 "state": "enabled", 00:21:12.528 "listen_address": { 00:21:12.528 "trtype": "TCP", 00:21:12.528 "adrfam": "IPv4", 00:21:12.528 "traddr": "10.0.0.2", 00:21:12.528 "trsvcid": "4420" 00:21:12.528 }, 00:21:12.528 "peer_address": { 00:21:12.528 "trtype": "TCP", 00:21:12.528 "adrfam": "IPv4", 00:21:12.528 "traddr": "10.0.0.1", 00:21:12.528 "trsvcid": "41864" 00:21:12.528 }, 00:21:12.528 "auth": { 00:21:12.528 "state": "completed", 00:21:12.528 "digest": "sha256", 00:21:12.528 "dhgroup": "ffdhe4096" 00:21:12.528 } 00:21:12.528 } 00:21:12.528 ]' 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.528 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.789 10:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:13.361 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.622 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:13.622 10:14:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:13.883 00:21:13.883 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:13.883 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.883 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:14.144 { 00:21:14.144 "cntlid": 33, 00:21:14.144 "qid": 0, 00:21:14.144 "state": "enabled", 00:21:14.144 "listen_address": { 00:21:14.144 "trtype": "TCP", 00:21:14.144 "adrfam": "IPv4", 00:21:14.144 "traddr": "10.0.0.2", 00:21:14.144 "trsvcid": "4420" 00:21:14.144 }, 00:21:14.144 "peer_address": { 00:21:14.144 "trtype": "TCP", 00:21:14.144 "adrfam": "IPv4", 00:21:14.144 "traddr": "10.0.0.1", 00:21:14.144 "trsvcid": "41888" 00:21:14.144 }, 00:21:14.144 "auth": { 00:21:14.144 "state": "completed", 00:21:14.144 "digest": "sha256", 00:21:14.144 "dhgroup": "ffdhe6144" 00:21:14.144 } 00:21:14.144 } 00:21:14.144 ]' 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:14.144 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.405 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:14.405 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.405 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.405 10:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.405 10:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:15.349 10:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.350 10:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:15.350 10:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.350 10:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.350 10:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.350 10:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:15.350 10:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:15.350 10:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:15.350 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:15.923 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:15.923 { 
00:21:15.923 "cntlid": 35, 00:21:15.923 "qid": 0, 00:21:15.923 "state": "enabled", 00:21:15.923 "listen_address": { 00:21:15.923 "trtype": "TCP", 00:21:15.923 "adrfam": "IPv4", 00:21:15.923 "traddr": "10.0.0.2", 00:21:15.923 "trsvcid": "4420" 00:21:15.923 }, 00:21:15.923 "peer_address": { 00:21:15.923 "trtype": "TCP", 00:21:15.923 "adrfam": "IPv4", 00:21:15.923 "traddr": "10.0.0.1", 00:21:15.923 "trsvcid": "52152" 00:21:15.923 }, 00:21:15.923 "auth": { 00:21:15.923 "state": "completed", 00:21:15.923 "digest": "sha256", 00:21:15.923 "dhgroup": "ffdhe6144" 00:21:15.923 } 00:21:15.923 } 00:21:15.923 ]' 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.923 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:16.185 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.185 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:16.185 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.185 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.185 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.185 10:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:17.127 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.128 
10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:17.128 10:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:17.697 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:17.697 { 00:21:17.697 "cntlid": 37, 00:21:17.697 "qid": 0, 00:21:17.697 "state": "enabled", 00:21:17.697 "listen_address": { 00:21:17.697 "trtype": "TCP", 00:21:17.697 "adrfam": "IPv4", 00:21:17.697 "traddr": "10.0.0.2", 00:21:17.697 "trsvcid": "4420" 00:21:17.697 }, 00:21:17.697 "peer_address": { 00:21:17.697 "trtype": "TCP", 00:21:17.697 "adrfam": "IPv4", 00:21:17.697 "traddr": "10.0.0.1", 00:21:17.697 "trsvcid": "52190" 00:21:17.697 }, 00:21:17.697 "auth": { 00:21:17.697 "state": "completed", 00:21:17.697 "digest": "sha256", 00:21:17.697 "dhgroup": "ffdhe6144" 00:21:17.697 } 00:21:17.697 } 00:21:17.697 ]' 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.697 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:17.957 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:17.957 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:17.957 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.957 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.957 10:15:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.958 10:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:18.898 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.898 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:18.898 10:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.898 10:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.898 10:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.898 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:18.898 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.899 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.470 00:21:19.470 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:19.470 10:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:19.470 10:15:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:19.470 { 00:21:19.470 "cntlid": 39, 00:21:19.470 "qid": 0, 00:21:19.470 "state": "enabled", 00:21:19.470 "listen_address": { 00:21:19.470 "trtype": "TCP", 00:21:19.470 "adrfam": "IPv4", 00:21:19.470 "traddr": "10.0.0.2", 00:21:19.470 "trsvcid": "4420" 00:21:19.470 }, 00:21:19.470 "peer_address": { 00:21:19.470 "trtype": "TCP", 00:21:19.470 "adrfam": "IPv4", 00:21:19.470 "traddr": "10.0.0.1", 00:21:19.470 "trsvcid": "52208" 00:21:19.470 }, 00:21:19.470 "auth": { 00:21:19.470 "state": "completed", 00:21:19.470 "digest": "sha256", 00:21:19.470 "dhgroup": "ffdhe6144" 00:21:19.470 } 00:21:19.470 } 00:21:19.470 ]' 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.470 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:19.732 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.732 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.732 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.732 10:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.677 10:15:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.677 10:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.678 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:20.678 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:21.250 00:21:21.250 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:21.250 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:21.250 10:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:21.512 { 00:21:21.512 "cntlid": 41, 00:21:21.512 "qid": 0, 00:21:21.512 "state": "enabled", 00:21:21.512 "listen_address": { 00:21:21.512 "trtype": "TCP", 00:21:21.512 "adrfam": "IPv4", 00:21:21.512 "traddr": "10.0.0.2", 00:21:21.512 "trsvcid": "4420" 00:21:21.512 }, 00:21:21.512 "peer_address": { 00:21:21.512 "trtype": "TCP", 00:21:21.512 "adrfam": "IPv4", 00:21:21.512 "traddr": "10.0.0.1", 00:21:21.512 "trsvcid": "52246" 00:21:21.512 }, 
00:21:21.512 "auth": { 00:21:21.512 "state": "completed", 00:21:21.512 "digest": "sha256", 00:21:21.512 "dhgroup": "ffdhe8192" 00:21:21.512 } 00:21:21.512 } 00:21:21.512 ]' 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.512 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.773 10:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:22.346 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.346 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.346 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.346 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.346 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.346 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:22.346 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:22.609 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:23.246 00:21:23.246 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:23.246 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:23.246 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.246 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.246 10:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.246 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.246 10:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.246 10:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.246 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:23.246 { 00:21:23.246 "cntlid": 43, 00:21:23.246 "qid": 0, 00:21:23.246 "state": "enabled", 00:21:23.246 "listen_address": { 00:21:23.246 "trtype": "TCP", 00:21:23.246 "adrfam": "IPv4", 00:21:23.246 "traddr": "10.0.0.2", 00:21:23.246 "trsvcid": "4420" 00:21:23.246 }, 00:21:23.246 "peer_address": { 00:21:23.246 "trtype": "TCP", 00:21:23.246 "adrfam": "IPv4", 00:21:23.247 "traddr": "10.0.0.1", 00:21:23.247 "trsvcid": "52270" 00:21:23.247 }, 00:21:23.247 "auth": { 00:21:23.247 "state": "completed", 00:21:23.247 "digest": "sha256", 00:21:23.247 "dhgroup": "ffdhe8192" 00:21:23.247 } 00:21:23.247 } 00:21:23.247 ]' 00:21:23.247 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.508 10:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:24.453 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:25.028 00:21:25.028 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:25.028 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:25.028 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.291 10:15:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:25.291 { 00:21:25.291 "cntlid": 45, 00:21:25.291 "qid": 0, 00:21:25.291 "state": "enabled", 00:21:25.291 "listen_address": { 00:21:25.291 "trtype": "TCP", 00:21:25.291 "adrfam": "IPv4", 00:21:25.291 "traddr": "10.0.0.2", 00:21:25.291 "trsvcid": "4420" 00:21:25.291 }, 00:21:25.291 "peer_address": { 00:21:25.291 "trtype": "TCP", 00:21:25.291 "adrfam": "IPv4", 00:21:25.291 "traddr": "10.0.0.1", 00:21:25.291 "trsvcid": "52298" 00:21:25.291 }, 00:21:25.291 "auth": { 00:21:25.291 "state": "completed", 00:21:25.291 "digest": "sha256", 00:21:25.291 "dhgroup": "ffdhe8192" 00:21:25.291 } 00:21:25.291 } 00:21:25.291 ]' 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.291 10:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:25.291 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.291 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:25.291 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.291 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.291 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.553 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:26.523 10:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe8192 3 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.523 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.101 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:27.101 { 00:21:27.101 "cntlid": 47, 00:21:27.101 "qid": 0, 00:21:27.101 "state": "enabled", 00:21:27.101 "listen_address": { 00:21:27.101 "trtype": "TCP", 00:21:27.101 "adrfam": "IPv4", 00:21:27.101 "traddr": "10.0.0.2", 00:21:27.101 "trsvcid": "4420" 00:21:27.101 }, 00:21:27.101 "peer_address": { 00:21:27.101 "trtype": "TCP", 00:21:27.101 "adrfam": "IPv4", 00:21:27.101 "traddr": "10.0.0.1", 00:21:27.101 "trsvcid": "52604" 00:21:27.101 }, 00:21:27.101 "auth": { 00:21:27.101 "state": "completed", 00:21:27.101 "digest": "sha256", 00:21:27.101 "dhgroup": "ffdhe8192" 00:21:27.101 } 00:21:27.101 } 00:21:27.101 ]' 00:21:27.101 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:27.362 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.362 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:27.362 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == 
\f\f\d\h\e\8\1\9\2 ]] 00:21:27.362 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:27.362 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.362 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.362 10:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.622 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:28.195 10:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:28.458 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:21:28.458 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:28.458 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.458 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:28.458 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.459 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:28.459 10:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.459 10:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.459 10:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.459 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 00:21:28.459 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:28.720 00:21:28.720 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:28.720 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:28.720 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.720 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.720 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.721 10:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.721 10:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.721 10:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.721 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:28.721 { 00:21:28.721 "cntlid": 49, 00:21:28.721 "qid": 0, 00:21:28.721 "state": "enabled", 00:21:28.721 "listen_address": { 00:21:28.721 "trtype": "TCP", 00:21:28.721 "adrfam": "IPv4", 00:21:28.721 "traddr": "10.0.0.2", 00:21:28.721 "trsvcid": "4420" 00:21:28.721 }, 00:21:28.721 "peer_address": { 00:21:28.721 "trtype": "TCP", 00:21:28.721 "adrfam": "IPv4", 00:21:28.721 "traddr": "10.0.0.1", 00:21:28.721 "trsvcid": "52628" 00:21:28.721 }, 00:21:28.721 "auth": { 00:21:28.721 "state": "completed", 00:21:28.721 "digest": "sha384", 00:21:28.721 "dhgroup": "null" 00:21:28.721 } 00:21:28.721 } 00:21:28.721 ]' 00:21:28.721 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:28.982 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.982 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:28.982 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:28.982 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:28.982 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.982 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.982 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.243 10:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:29.815 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:30.076 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:30.076 00:21:30.338 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:30.338 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:30.338 10:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.338 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.338 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.338 10:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.338 10:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.338 10:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.338 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:30.338 { 00:21:30.338 
"cntlid": 51, 00:21:30.338 "qid": 0, 00:21:30.338 "state": "enabled", 00:21:30.339 "listen_address": { 00:21:30.339 "trtype": "TCP", 00:21:30.339 "adrfam": "IPv4", 00:21:30.339 "traddr": "10.0.0.2", 00:21:30.339 "trsvcid": "4420" 00:21:30.339 }, 00:21:30.339 "peer_address": { 00:21:30.339 "trtype": "TCP", 00:21:30.339 "adrfam": "IPv4", 00:21:30.339 "traddr": "10.0.0.1", 00:21:30.339 "trsvcid": "52668" 00:21:30.339 }, 00:21:30.339 "auth": { 00:21:30.339 "state": "completed", 00:21:30.339 "digest": "sha384", 00:21:30.339 "dhgroup": "null" 00:21:30.339 } 00:21:30.339 } 00:21:30.339 ]' 00:21:30.339 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:30.339 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.339 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:30.600 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:30.600 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:30.600 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.600 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.600 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.600 10:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.544 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:31.805 00:21:31.805 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:31.805 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:31.805 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:32.066 { 00:21:32.066 "cntlid": 53, 00:21:32.066 "qid": 0, 00:21:32.066 "state": "enabled", 00:21:32.066 "listen_address": { 00:21:32.066 "trtype": "TCP", 00:21:32.066 "adrfam": "IPv4", 00:21:32.066 "traddr": "10.0.0.2", 00:21:32.066 "trsvcid": "4420" 00:21:32.066 }, 00:21:32.066 "peer_address": { 00:21:32.066 "trtype": "TCP", 00:21:32.066 "adrfam": "IPv4", 00:21:32.066 "traddr": "10.0.0.1", 00:21:32.066 "trsvcid": "52684" 00:21:32.066 }, 00:21:32.066 "auth": { 00:21:32.066 "state": "completed", 00:21:32.066 "digest": "sha384", 00:21:32.066 "dhgroup": "null" 00:21:32.066 } 00:21:32.066 } 00:21:32.066 ]' 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.066 10:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.328 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.272 10:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.533 00:21:33.533 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:33.533 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:33.533 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:33.794 { 00:21:33.794 "cntlid": 55, 00:21:33.794 "qid": 0, 00:21:33.794 "state": "enabled", 00:21:33.794 "listen_address": { 00:21:33.794 "trtype": "TCP", 00:21:33.794 "adrfam": "IPv4", 00:21:33.794 "traddr": "10.0.0.2", 00:21:33.794 "trsvcid": "4420" 00:21:33.794 }, 00:21:33.794 "peer_address": { 00:21:33.794 "trtype": "TCP", 00:21:33.794 "adrfam": "IPv4", 00:21:33.794 "traddr": "10.0.0.1", 00:21:33.794 "trsvcid": "52722" 00:21:33.794 }, 00:21:33.794 "auth": { 00:21:33.794 "state": "completed", 00:21:33.794 "digest": "sha384", 00:21:33.794 "dhgroup": "null" 00:21:33.794 } 00:21:33.794 } 00:21:33.794 ]' 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.794 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.056 10:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:34.629 
10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:34.629 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:34.891 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.154 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.154 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:35.154 { 00:21:35.154 "cntlid": 57, 00:21:35.154 "qid": 0, 00:21:35.154 "state": "enabled", 00:21:35.154 "listen_address": { 00:21:35.154 "trtype": "TCP", 00:21:35.154 "adrfam": "IPv4", 00:21:35.154 "traddr": "10.0.0.2", 00:21:35.154 "trsvcid": "4420" 00:21:35.154 }, 00:21:35.154 "peer_address": { 00:21:35.154 "trtype": "TCP", 00:21:35.154 "adrfam": "IPv4", 00:21:35.154 "traddr": "10.0.0.1", 00:21:35.154 "trsvcid": "52730" 00:21:35.154 }, 00:21:35.154 "auth": { 00:21:35.154 "state": "completed", 00:21:35.154 "digest": "sha384", 
00:21:35.154 "dhgroup": "ffdhe2048" 00:21:35.154 } 00:21:35.154 } 00:21:35.154 ]' 00:21:35.416 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:35.416 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.416 10:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:35.416 10:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:35.416 10:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:35.416 10:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.416 10:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.416 10:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.677 10:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:36.249 10:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.249 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.249 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.249 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.249 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.249 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:36.249 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:36.249 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:36.511 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:36.772 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:36.772 { 00:21:36.772 "cntlid": 59, 00:21:36.772 "qid": 0, 00:21:36.772 "state": "enabled", 00:21:36.772 "listen_address": { 00:21:36.772 "trtype": "TCP", 00:21:36.772 "adrfam": "IPv4", 00:21:36.772 "traddr": "10.0.0.2", 00:21:36.772 "trsvcid": "4420" 00:21:36.772 }, 00:21:36.772 "peer_address": { 00:21:36.772 "trtype": "TCP", 00:21:36.772 "adrfam": "IPv4", 00:21:36.772 "traddr": "10.0.0.1", 00:21:36.772 "trsvcid": "43606" 00:21:36.772 }, 00:21:36.772 "auth": { 00:21:36.772 "state": "completed", 00:21:36.772 "digest": "sha384", 00:21:36.772 "dhgroup": "ffdhe2048" 00:21:36.772 } 00:21:36.772 } 00:21:36.772 ]' 00:21:36.772 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:37.034 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.034 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:37.034 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:37.034 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:37.034 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.034 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.034 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.296 10:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:37.896 10:15:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.896 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.896 10:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.896 10:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.896 10:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.896 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:37.896 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:37.896 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:38.158 10:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:38.419 00:21:38.419 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:38.419 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:38.419 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.419 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.681 10:15:24 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:38.681 { 00:21:38.681 "cntlid": 61, 00:21:38.681 "qid": 0, 00:21:38.681 "state": "enabled", 00:21:38.681 "listen_address": { 00:21:38.681 "trtype": "TCP", 00:21:38.681 "adrfam": "IPv4", 00:21:38.681 "traddr": "10.0.0.2", 00:21:38.681 "trsvcid": "4420" 00:21:38.681 }, 00:21:38.681 "peer_address": { 00:21:38.681 "trtype": "TCP", 00:21:38.681 "adrfam": "IPv4", 00:21:38.681 "traddr": "10.0.0.1", 00:21:38.681 "trsvcid": "43632" 00:21:38.681 }, 00:21:38.681 "auth": { 00:21:38.681 "state": "completed", 00:21:38.681 "digest": "sha384", 00:21:38.681 "dhgroup": "ffdhe2048" 00:21:38.681 } 00:21:38.681 } 00:21:38.681 ]' 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.681 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.943 10:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:39.516 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.517 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.517 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.517 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.517 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.517 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:39.517 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:39.517 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest 
dhgroup key qpairs 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.778 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.039 00:21:40.039 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:40.039 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.039 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:40.301 { 00:21:40.301 "cntlid": 63, 00:21:40.301 "qid": 0, 00:21:40.301 "state": "enabled", 00:21:40.301 "listen_address": { 00:21:40.301 "trtype": "TCP", 00:21:40.301 "adrfam": "IPv4", 00:21:40.301 "traddr": "10.0.0.2", 00:21:40.301 "trsvcid": "4420" 00:21:40.301 }, 00:21:40.301 "peer_address": { 00:21:40.301 "trtype": "TCP", 00:21:40.301 "adrfam": "IPv4", 00:21:40.301 "traddr": "10.0.0.1", 00:21:40.301 "trsvcid": "43658" 00:21:40.301 }, 00:21:40.301 "auth": { 00:21:40.301 "state": "completed", 00:21:40.301 "digest": "sha384", 00:21:40.301 "dhgroup": "ffdhe2048" 00:21:40.301 } 00:21:40.301 } 00:21:40.301 ]' 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.301 10:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.563 10:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:41.136 10:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.398 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.659 00:21:41.659 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:41.659 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.659 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:41.921 { 00:21:41.921 "cntlid": 65, 00:21:41.921 "qid": 0, 00:21:41.921 "state": "enabled", 00:21:41.921 "listen_address": { 00:21:41.921 "trtype": "TCP", 00:21:41.921 "adrfam": "IPv4", 00:21:41.921 "traddr": "10.0.0.2", 00:21:41.921 "trsvcid": "4420" 00:21:41.921 }, 00:21:41.921 "peer_address": { 00:21:41.921 "trtype": "TCP", 00:21:41.921 "adrfam": "IPv4", 00:21:41.921 "traddr": "10.0.0.1", 00:21:41.921 "trsvcid": "43684" 00:21:41.921 }, 00:21:41.921 "auth": { 00:21:41.921 "state": "completed", 00:21:41.921 "digest": "sha384", 00:21:41.921 "dhgroup": "ffdhe3072" 00:21:41.921 } 00:21:41.921 } 00:21:41.921 ]' 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.921 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.182 10:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:42.755 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.755 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.755 10:15:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.755 10:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:43.016 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:43.278 00:21:43.278 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:43.278 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:43.278 10:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:43.540 { 00:21:43.540 "cntlid": 67, 00:21:43.540 "qid": 0, 00:21:43.540 "state": "enabled", 00:21:43.540 "listen_address": { 00:21:43.540 "trtype": "TCP", 
00:21:43.540 "adrfam": "IPv4", 00:21:43.540 "traddr": "10.0.0.2", 00:21:43.540 "trsvcid": "4420" 00:21:43.540 }, 00:21:43.540 "peer_address": { 00:21:43.540 "trtype": "TCP", 00:21:43.540 "adrfam": "IPv4", 00:21:43.540 "traddr": "10.0.0.1", 00:21:43.540 "trsvcid": "43712" 00:21:43.540 }, 00:21:43.540 "auth": { 00:21:43.540 "state": "completed", 00:21:43.540 "digest": "sha384", 00:21:43.540 "dhgroup": "ffdhe3072" 00:21:43.540 } 00:21:43.540 } 00:21:43.540 ]' 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.540 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.801 10:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.746 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:45.007 00:21:45.007 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:45.007 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:45.007 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:45.268 { 00:21:45.268 "cntlid": 69, 00:21:45.268 "qid": 0, 00:21:45.268 "state": "enabled", 00:21:45.268 "listen_address": { 00:21:45.268 "trtype": "TCP", 00:21:45.268 "adrfam": "IPv4", 00:21:45.268 "traddr": "10.0.0.2", 00:21:45.268 "trsvcid": "4420" 00:21:45.268 }, 00:21:45.268 "peer_address": { 00:21:45.268 "trtype": "TCP", 00:21:45.268 "adrfam": "IPv4", 00:21:45.268 "traddr": "10.0.0.1", 00:21:45.268 "trsvcid": "43744" 00:21:45.268 }, 00:21:45.268 "auth": { 00:21:45.268 "state": "completed", 00:21:45.268 "digest": "sha384", 00:21:45.268 "dhgroup": "ffdhe3072" 00:21:45.268 } 00:21:45.268 } 00:21:45.268 ]' 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.268 10:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:45.530 10:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:46.102 10:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.102 10:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.102 10:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.102 10:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.363 10:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.363 10:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:46.363 10:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:46.363 10:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.363 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.624 00:21:46.624 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:46.624 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:46.624 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:46.885 { 00:21:46.885 "cntlid": 71, 00:21:46.885 "qid": 0, 00:21:46.885 "state": "enabled", 00:21:46.885 "listen_address": { 00:21:46.885 "trtype": "TCP", 00:21:46.885 "adrfam": "IPv4", 00:21:46.885 "traddr": "10.0.0.2", 00:21:46.885 "trsvcid": "4420" 00:21:46.885 }, 00:21:46.885 "peer_address": { 00:21:46.885 "trtype": "TCP", 00:21:46.885 "adrfam": "IPv4", 00:21:46.885 "traddr": "10.0.0.1", 00:21:46.885 "trsvcid": "41820" 00:21:46.885 }, 00:21:46.885 "auth": { 00:21:46.885 "state": "completed", 00:21:46.885 "digest": "sha384", 00:21:46.885 "dhgroup": "ffdhe3072" 00:21:46.885 } 00:21:46.885 } 00:21:46.885 ]' 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.885 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.146 10:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:48.090 10:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:48.351 00:21:48.351 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:48.351 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:48.351 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:48.613 { 00:21:48.613 "cntlid": 73, 00:21:48.613 "qid": 0, 00:21:48.613 "state": "enabled", 00:21:48.613 "listen_address": { 00:21:48.613 "trtype": "TCP", 00:21:48.613 "adrfam": "IPv4", 00:21:48.613 "traddr": "10.0.0.2", 00:21:48.613 "trsvcid": "4420" 00:21:48.613 }, 00:21:48.613 "peer_address": { 00:21:48.613 "trtype": "TCP", 00:21:48.613 "adrfam": "IPv4", 00:21:48.613 "traddr": "10.0.0.1", 00:21:48.613 "trsvcid": "41836" 00:21:48.613 }, 00:21:48.613 "auth": { 00:21:48.613 "state": "completed", 00:21:48.613 "digest": "sha384", 00:21:48.613 "dhgroup": "ffdhe4096" 
00:21:48.613 } 00:21:48.613 } 00:21:48.613 ]' 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.613 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.875 10:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:49.448 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:49.710 10:15:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:49.710 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:49.972 00:21:49.972 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:49.972 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:49.972 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:50.233 { 00:21:50.233 "cntlid": 75, 00:21:50.233 "qid": 0, 00:21:50.233 "state": "enabled", 00:21:50.233 "listen_address": { 00:21:50.233 "trtype": "TCP", 00:21:50.233 "adrfam": "IPv4", 00:21:50.233 "traddr": "10.0.0.2", 00:21:50.233 "trsvcid": "4420" 00:21:50.233 }, 00:21:50.233 "peer_address": { 00:21:50.233 "trtype": "TCP", 00:21:50.233 "adrfam": "IPv4", 00:21:50.233 "traddr": "10.0.0.1", 00:21:50.233 "trsvcid": "41868" 00:21:50.233 }, 00:21:50.233 "auth": { 00:21:50.233 "state": "completed", 00:21:50.233 "digest": "sha384", 00:21:50.233 "dhgroup": "ffdhe4096" 00:21:50.233 } 00:21:50.233 } 00:21:50.233 ]' 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.233 10:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.495 10:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:51.438 10:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:51.438 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:51.699 00:21:51.699 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:51.699 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:51.699 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:51.960 { 00:21:51.960 "cntlid": 77, 00:21:51.960 "qid": 0, 00:21:51.960 "state": "enabled", 00:21:51.960 "listen_address": { 00:21:51.960 "trtype": "TCP", 00:21:51.960 "adrfam": "IPv4", 00:21:51.960 "traddr": "10.0.0.2", 00:21:51.960 "trsvcid": "4420" 00:21:51.960 }, 00:21:51.960 "peer_address": { 00:21:51.960 "trtype": "TCP", 00:21:51.960 "adrfam": "IPv4", 00:21:51.960 "traddr": "10.0.0.1", 00:21:51.960 "trsvcid": "41908" 00:21:51.960 }, 00:21:51.960 "auth": { 00:21:51.960 "state": "completed", 00:21:51.960 "digest": "sha384", 00:21:51.960 "dhgroup": "ffdhe4096" 00:21:51.960 } 00:21:51.960 } 00:21:51.960 ]' 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.960 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.222 10:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:52.857 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:53.118 
10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.118 10:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.379 00:21:53.379 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:53.379 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:53.379 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:53.641 { 00:21:53.641 "cntlid": 79, 00:21:53.641 "qid": 0, 00:21:53.641 "state": "enabled", 00:21:53.641 "listen_address": { 00:21:53.641 "trtype": "TCP", 00:21:53.641 "adrfam": "IPv4", 00:21:53.641 "traddr": "10.0.0.2", 00:21:53.641 "trsvcid": "4420" 00:21:53.641 }, 00:21:53.641 "peer_address": { 00:21:53.641 "trtype": "TCP", 00:21:53.641 "adrfam": "IPv4", 00:21:53.641 "traddr": "10.0.0.1", 00:21:53.641 "trsvcid": "41942" 00:21:53.641 }, 00:21:53.641 "auth": { 00:21:53.641 "state": "completed", 00:21:53.641 "digest": "sha384", 00:21:53.641 "dhgroup": "ffdhe4096" 00:21:53.641 } 00:21:53.641 } 00:21:53.641 ]' 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:53.641 10:15:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.641 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.903 10:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.476 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:54.738 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:54.999 00:21:54.999 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:54.999 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:54.999 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.260 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.260 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.260 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:55.260 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.260 10:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:55.260 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:55.260 { 00:21:55.260 "cntlid": 81, 00:21:55.260 "qid": 0, 00:21:55.260 "state": "enabled", 00:21:55.260 "listen_address": { 00:21:55.260 "trtype": "TCP", 00:21:55.260 "adrfam": "IPv4", 00:21:55.260 "traddr": "10.0.0.2", 00:21:55.260 "trsvcid": "4420" 00:21:55.260 }, 00:21:55.260 "peer_address": { 00:21:55.260 "trtype": "TCP", 00:21:55.260 "adrfam": "IPv4", 00:21:55.260 "traddr": "10.0.0.1", 00:21:55.260 "trsvcid": "41974" 00:21:55.260 }, 00:21:55.260 "auth": { 00:21:55.260 "state": "completed", 00:21:55.260 "digest": "sha384", 00:21:55.260 "dhgroup": "ffdhe6144" 00:21:55.260 } 00:21:55.260 } 00:21:55.260 ]' 00:21:55.260 10:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:55.260 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.260 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:55.260 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.260 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:55.522 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.522 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.522 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.522 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:21:56.466 10:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:56.466 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:56.727 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:56.988 { 00:21:56.988 "cntlid": 83, 00:21:56.988 "qid": 0, 00:21:56.988 "state": "enabled", 00:21:56.988 "listen_address": { 00:21:56.988 "trtype": "TCP", 00:21:56.988 "adrfam": "IPv4", 00:21:56.988 "traddr": 
"10.0.0.2", 00:21:56.988 "trsvcid": "4420" 00:21:56.988 }, 00:21:56.988 "peer_address": { 00:21:56.988 "trtype": "TCP", 00:21:56.988 "adrfam": "IPv4", 00:21:56.988 "traddr": "10.0.0.1", 00:21:56.988 "trsvcid": "39484" 00:21:56.988 }, 00:21:56.988 "auth": { 00:21:56.988 "state": "completed", 00:21:56.988 "digest": "sha384", 00:21:56.988 "dhgroup": "ffdhe6144" 00:21:56.988 } 00:21:56.988 } 00:21:56.988 ]' 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.988 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:57.250 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.250 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:57.250 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.250 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.250 10:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.250 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key2 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:58.198 10:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:58.771 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:58.771 { 00:21:58.771 "cntlid": 85, 00:21:58.771 "qid": 0, 00:21:58.771 "state": "enabled", 00:21:58.771 "listen_address": { 00:21:58.771 "trtype": "TCP", 00:21:58.771 "adrfam": "IPv4", 00:21:58.771 "traddr": "10.0.0.2", 00:21:58.771 "trsvcid": "4420" 00:21:58.771 }, 00:21:58.771 "peer_address": { 00:21:58.771 "trtype": "TCP", 00:21:58.771 "adrfam": "IPv4", 00:21:58.771 "traddr": "10.0.0.1", 00:21:58.771 "trsvcid": "39508" 00:21:58.771 }, 00:21:58.771 "auth": { 00:21:58.771 "state": "completed", 00:21:58.771 "digest": "sha384", 00:21:58.771 "dhgroup": "ffdhe6144" 00:21:58.771 } 00:21:58.771 } 00:21:58.771 ]' 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.771 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:59.033 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.033 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:59.033 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.033 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.033 10:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.033 10:15:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:21:59.977 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.977 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.977 10:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.977 10:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.977 10:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.978 10:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:00.239 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:00.501 10:15:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:00.501 { 00:22:00.501 "cntlid": 87, 00:22:00.501 "qid": 0, 00:22:00.501 "state": "enabled", 00:22:00.501 "listen_address": { 00:22:00.501 "trtype": "TCP", 00:22:00.501 "adrfam": "IPv4", 00:22:00.501 "traddr": "10.0.0.2", 00:22:00.501 "trsvcid": "4420" 00:22:00.501 }, 00:22:00.501 "peer_address": { 00:22:00.501 "trtype": "TCP", 00:22:00.501 "adrfam": "IPv4", 00:22:00.501 "traddr": "10.0.0.1", 00:22:00.501 "trsvcid": "39538" 00:22:00.501 }, 00:22:00.501 "auth": { 00:22:00.501 "state": "completed", 00:22:00.501 "digest": "sha384", 00:22:00.501 "dhgroup": "ffdhe6144" 00:22:00.501 } 00:22:00.501 } 00:22:00.501 ]' 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:00.501 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:00.763 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.763 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:00.763 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.763 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.763 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.763 10:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:01.708 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:01.709 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:02.283 00:22:02.283 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:02.283 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:02.283 10:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:02.545 { 00:22:02.545 "cntlid": 89, 00:22:02.545 "qid": 0, 00:22:02.545 "state": "enabled", 00:22:02.545 "listen_address": { 00:22:02.545 "trtype": "TCP", 00:22:02.545 "adrfam": "IPv4", 00:22:02.545 "traddr": "10.0.0.2", 00:22:02.545 "trsvcid": "4420" 00:22:02.545 }, 00:22:02.545 "peer_address": { 00:22:02.545 "trtype": "TCP", 00:22:02.545 "adrfam": "IPv4", 00:22:02.545 "traddr": "10.0.0.1", 00:22:02.545 "trsvcid": "39554" 00:22:02.545 }, 00:22:02.545 "auth": { 00:22:02.545 "state": "completed", 00:22:02.545 "digest": "sha384", 00:22:02.545 "dhgroup": "ffdhe8192" 00:22:02.545 } 00:22:02.545 } 00:22:02.545 ]' 00:22:02.545 10:15:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.545 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.546 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.808 10:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:03.381 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:03.642 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:22:03.642 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:03.642 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:03.642 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:03.642 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:03.643 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:03.643 10:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:03.643 10:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.643 10:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:03.643 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:03.643 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:04.292 00:22:04.292 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:04.292 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.292 10:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:04.292 { 00:22:04.292 "cntlid": 91, 00:22:04.292 "qid": 0, 00:22:04.292 "state": "enabled", 00:22:04.292 "listen_address": { 00:22:04.292 "trtype": "TCP", 00:22:04.292 "adrfam": "IPv4", 00:22:04.292 "traddr": "10.0.0.2", 00:22:04.292 "trsvcid": "4420" 00:22:04.292 }, 00:22:04.292 "peer_address": { 00:22:04.292 "trtype": "TCP", 00:22:04.292 "adrfam": "IPv4", 00:22:04.292 "traddr": "10.0.0.1", 00:22:04.292 "trsvcid": "39578" 00:22:04.292 }, 00:22:04.292 "auth": { 00:22:04.292 "state": "completed", 00:22:04.292 "digest": "sha384", 00:22:04.292 "dhgroup": "ffdhe8192" 00:22:04.292 } 00:22:04.292 } 00:22:04.292 ]' 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.292 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:04.552 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.552 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:04.552 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.552 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.552 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.552 10:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:05.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:05.496 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:06.068 00:22:06.068 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:06.068 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:06.068 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.330 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.330 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.330 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:06.330 10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.330 
10:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:06.330 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:06.330 { 00:22:06.330 "cntlid": 93, 00:22:06.330 "qid": 0, 00:22:06.330 "state": "enabled", 00:22:06.330 "listen_address": { 00:22:06.330 "trtype": "TCP", 00:22:06.330 "adrfam": "IPv4", 00:22:06.330 "traddr": "10.0.0.2", 00:22:06.330 "trsvcid": "4420" 00:22:06.330 }, 00:22:06.330 "peer_address": { 00:22:06.330 "trtype": "TCP", 00:22:06.330 "adrfam": "IPv4", 00:22:06.330 "traddr": "10.0.0.1", 00:22:06.330 "trsvcid": "53140" 00:22:06.330 }, 00:22:06.330 "auth": { 00:22:06.330 "state": "completed", 00:22:06.330 "digest": "sha384", 00:22:06.330 "dhgroup": "ffdhe8192" 00:22:06.330 } 00:22:06.330 } 00:22:06.330 ]' 00:22:06.330 10:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:06.330 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:06.330 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:06.330 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.330 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:06.330 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.330 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.330 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.592 10:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:07.538 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.539 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:08.134 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:08.134 { 00:22:08.134 "cntlid": 95, 00:22:08.134 "qid": 0, 00:22:08.134 "state": "enabled", 00:22:08.134 "listen_address": { 00:22:08.134 "trtype": "TCP", 00:22:08.134 "adrfam": "IPv4", 00:22:08.134 "traddr": "10.0.0.2", 00:22:08.134 "trsvcid": "4420" 00:22:08.134 }, 00:22:08.134 "peer_address": { 00:22:08.134 "trtype": "TCP", 00:22:08.134 "adrfam": "IPv4", 00:22:08.134 "traddr": "10.0.0.1", 00:22:08.134 "trsvcid": "53170" 00:22:08.134 }, 00:22:08.134 "auth": { 00:22:08.134 "state": "completed", 00:22:08.134 "digest": "sha384", 00:22:08.134 "dhgroup": "ffdhe8192" 00:22:08.134 } 00:22:08.134 } 00:22:08.134 ]' 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.134 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:08.395 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:08.395 10:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:08.395 10:15:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.395 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.395 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.395 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:09.339 10:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:09.339 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:09.600 00:22:09.600 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:09.600 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:09.600 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:09.862 { 00:22:09.862 "cntlid": 97, 00:22:09.862 "qid": 0, 00:22:09.862 "state": "enabled", 00:22:09.862 "listen_address": { 00:22:09.862 "trtype": "TCP", 00:22:09.862 "adrfam": "IPv4", 00:22:09.862 "traddr": "10.0.0.2", 00:22:09.862 "trsvcid": "4420" 00:22:09.862 }, 00:22:09.862 "peer_address": { 00:22:09.862 "trtype": "TCP", 00:22:09.862 "adrfam": "IPv4", 00:22:09.862 "traddr": "10.0.0.1", 00:22:09.862 "trsvcid": "53204" 00:22:09.862 }, 00:22:09.862 "auth": { 00:22:09.862 "state": "completed", 00:22:09.862 "digest": "sha512", 00:22:09.862 "dhgroup": "null" 00:22:09.862 } 00:22:09.862 } 00:22:09.862 ]' 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.862 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.123 10:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:11.071 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:11.332 00:22:11.332 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:11.332 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.332 10:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:11.332 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.332 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.332 10:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.332 10:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.332 10:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.332 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:11.332 { 00:22:11.332 "cntlid": 99, 00:22:11.332 "qid": 0, 00:22:11.332 "state": "enabled", 00:22:11.332 "listen_address": { 00:22:11.333 "trtype": "TCP", 00:22:11.333 "adrfam": "IPv4", 00:22:11.333 
"traddr": "10.0.0.2", 00:22:11.333 "trsvcid": "4420" 00:22:11.333 }, 00:22:11.333 "peer_address": { 00:22:11.333 "trtype": "TCP", 00:22:11.333 "adrfam": "IPv4", 00:22:11.333 "traddr": "10.0.0.1", 00:22:11.333 "trsvcid": "53230" 00:22:11.333 }, 00:22:11.333 "auth": { 00:22:11.333 "state": "completed", 00:22:11.333 "digest": "sha512", 00:22:11.333 "dhgroup": "null" 00:22:11.333 } 00:22:11.333 } 00:22:11.333 ]' 00:22:11.333 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.593 10:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:12.537 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:22:12.538 10:15:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:12.538 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:12.799 00:22:12.800 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:12.800 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:12.800 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:13.061 { 00:22:13.061 "cntlid": 101, 00:22:13.061 "qid": 0, 00:22:13.061 "state": "enabled", 00:22:13.061 "listen_address": { 00:22:13.061 "trtype": "TCP", 00:22:13.061 "adrfam": "IPv4", 00:22:13.061 "traddr": "10.0.0.2", 00:22:13.061 "trsvcid": "4420" 00:22:13.061 }, 00:22:13.061 "peer_address": { 00:22:13.061 "trtype": "TCP", 00:22:13.061 "adrfam": "IPv4", 00:22:13.061 "traddr": "10.0.0.1", 00:22:13.061 "trsvcid": "53262" 00:22:13.061 }, 00:22:13.061 "auth": { 00:22:13.061 "state": "completed", 00:22:13.061 "digest": "sha512", 00:22:13.061 "dhgroup": "null" 00:22:13.061 } 00:22:13.061 } 00:22:13.061 ]' 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.061 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.323 10:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:14.269 10:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:14.531 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.531 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.793 10:16:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:14.793 { 00:22:14.793 "cntlid": 103, 00:22:14.793 "qid": 0, 00:22:14.793 "state": "enabled", 00:22:14.793 "listen_address": { 00:22:14.793 "trtype": "TCP", 00:22:14.793 "adrfam": "IPv4", 00:22:14.793 "traddr": "10.0.0.2", 00:22:14.793 "trsvcid": "4420" 00:22:14.793 }, 00:22:14.793 "peer_address": { 00:22:14.793 "trtype": "TCP", 00:22:14.793 "adrfam": "IPv4", 00:22:14.793 "traddr": "10.0.0.1", 00:22:14.793 "trsvcid": "53294" 00:22:14.793 }, 00:22:14.793 "auth": { 00:22:14.793 "state": "completed", 00:22:14.793 "digest": "sha512", 00:22:14.793 "dhgroup": "null" 00:22:14.793 } 00:22:14.793 } 00:22:14.793 ]' 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:14.793 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:15.054 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.054 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.054 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.054 10:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:15.999 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:16.260 00:22:16.260 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:16.260 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:16.260 10:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:16.524 { 00:22:16.524 "cntlid": 105, 00:22:16.524 "qid": 0, 00:22:16.524 "state": "enabled", 00:22:16.524 "listen_address": { 00:22:16.524 "trtype": "TCP", 00:22:16.524 "adrfam": "IPv4", 00:22:16.524 "traddr": "10.0.0.2", 00:22:16.524 "trsvcid": "4420" 00:22:16.524 }, 00:22:16.524 "peer_address": { 00:22:16.524 "trtype": "TCP", 00:22:16.524 "adrfam": "IPv4", 00:22:16.524 "traddr": "10.0.0.1", 00:22:16.524 "trsvcid": "58948" 00:22:16.524 }, 00:22:16.524 "auth": { 00:22:16.524 "state": "completed", 00:22:16.524 "digest": "sha512", 00:22:16.524 "dhgroup": "ffdhe2048" 00:22:16.524 } 00:22:16.524 } 00:22:16.524 ]' 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:16.524 10:16:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.524 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.786 10:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:17.360 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:17.622 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:17.884 00:22:17.884 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:17.884 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:17.884 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:18.145 { 00:22:18.145 "cntlid": 107, 00:22:18.145 "qid": 0, 00:22:18.145 "state": "enabled", 00:22:18.145 "listen_address": { 00:22:18.145 "trtype": "TCP", 00:22:18.145 "adrfam": "IPv4", 00:22:18.145 "traddr": "10.0.0.2", 00:22:18.145 "trsvcid": "4420" 00:22:18.145 }, 00:22:18.145 "peer_address": { 00:22:18.145 "trtype": "TCP", 00:22:18.145 "adrfam": "IPv4", 00:22:18.145 "traddr": "10.0.0.1", 00:22:18.145 "trsvcid": "58970" 00:22:18.145 }, 00:22:18.145 "auth": { 00:22:18.145 "state": "completed", 00:22:18.145 "digest": "sha512", 00:22:18.145 "dhgroup": "ffdhe2048" 00:22:18.145 } 00:22:18.145 } 00:22:18.145 ]' 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.145 10:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.407 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:19.353 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:22:19.354 10:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.354 10:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.354 10:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:19.354 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.354 10:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:19.615 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:19.615 { 00:22:19.615 "cntlid": 109, 00:22:19.615 "qid": 0, 00:22:19.615 "state": "enabled", 00:22:19.615 "listen_address": { 00:22:19.615 "trtype": "TCP", 00:22:19.615 "adrfam": "IPv4", 00:22:19.615 "traddr": "10.0.0.2", 00:22:19.615 "trsvcid": "4420" 00:22:19.615 }, 00:22:19.615 "peer_address": { 00:22:19.615 "trtype": "TCP", 00:22:19.615 "adrfam": "IPv4", 00:22:19.615 "traddr": "10.0.0.1", 00:22:19.615 "trsvcid": "58996" 00:22:19.615 }, 00:22:19.615 "auth": { 00:22:19.615 "state": "completed", 00:22:19.615 "digest": "sha512", 00:22:19.615 "dhgroup": "ffdhe2048" 00:22:19.615 } 00:22:19.615 } 00:22:19.615 ]' 00:22:19.615 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.877 10:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.823 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.084 00:22:21.084 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:21.084 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.084 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:21.345 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.345 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.345 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:21.345 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.345 10:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:21.345 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:21.345 { 00:22:21.345 "cntlid": 111, 00:22:21.345 "qid": 0, 00:22:21.345 "state": "enabled", 00:22:21.345 "listen_address": { 00:22:21.345 "trtype": "TCP", 00:22:21.345 "adrfam": "IPv4", 00:22:21.345 "traddr": "10.0.0.2", 00:22:21.345 "trsvcid": "4420" 00:22:21.345 }, 00:22:21.345 "peer_address": { 00:22:21.345 "trtype": "TCP", 00:22:21.345 "adrfam": "IPv4", 00:22:21.345 "traddr": "10.0.0.1", 00:22:21.345 "trsvcid": "59016" 00:22:21.345 }, 00:22:21.345 "auth": { 00:22:21.345 "state": "completed", 00:22:21.345 "digest": "sha512", 00:22:21.345 "dhgroup": "ffdhe2048" 00:22:21.345 } 00:22:21.345 } 00:22:21.345 ]' 00:22:21.345 10:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:21.345 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.345 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:21.345 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.345 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:21.345 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.345 10:16:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.345 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.606 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:22.549 10:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:22.549 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:22.824 00:22:22.824 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:22.825 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:22.825 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:23.100 { 00:22:23.100 "cntlid": 113, 00:22:23.100 "qid": 0, 00:22:23.100 "state": "enabled", 00:22:23.100 "listen_address": { 00:22:23.100 "trtype": "TCP", 00:22:23.100 "adrfam": "IPv4", 00:22:23.100 "traddr": "10.0.0.2", 00:22:23.100 "trsvcid": "4420" 00:22:23.100 }, 00:22:23.100 "peer_address": { 00:22:23.100 "trtype": "TCP", 00:22:23.100 "adrfam": "IPv4", 00:22:23.100 "traddr": "10.0.0.1", 00:22:23.100 "trsvcid": "59028" 00:22:23.100 }, 00:22:23.100 "auth": { 00:22:23.100 "state": "completed", 00:22:23.100 "digest": "sha512", 00:22:23.100 "dhgroup": "ffdhe3072" 00:22:23.100 } 00:22:23.100 } 00:22:23.100 ]' 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.100 10:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:24.046 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:24.307 10:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:24.569 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:24.569 { 00:22:24.569 "cntlid": 115, 00:22:24.569 "qid": 0, 00:22:24.569 "state": "enabled", 00:22:24.569 "listen_address": { 00:22:24.569 "trtype": "TCP", 00:22:24.569 "adrfam": "IPv4", 00:22:24.569 "traddr": "10.0.0.2", 00:22:24.569 "trsvcid": "4420" 00:22:24.569 }, 00:22:24.569 "peer_address": { 00:22:24.569 
"trtype": "TCP", 00:22:24.569 "adrfam": "IPv4", 00:22:24.569 "traddr": "10.0.0.1", 00:22:24.569 "trsvcid": "59056" 00:22:24.569 }, 00:22:24.569 "auth": { 00:22:24.569 "state": "completed", 00:22:24.569 "digest": "sha512", 00:22:24.569 "dhgroup": "ffdhe3072" 00:22:24.569 } 00:22:24.569 } 00:22:24.569 ]' 00:22:24.569 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.830 10:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:25.775 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:26.036 00:22:26.036 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:26.036 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:26.036 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:26.297 { 00:22:26.297 "cntlid": 117, 00:22:26.297 "qid": 0, 00:22:26.297 "state": "enabled", 00:22:26.297 "listen_address": { 00:22:26.297 "trtype": "TCP", 00:22:26.297 "adrfam": "IPv4", 00:22:26.297 "traddr": "10.0.0.2", 00:22:26.297 "trsvcid": "4420" 00:22:26.297 }, 00:22:26.297 "peer_address": { 00:22:26.297 "trtype": "TCP", 00:22:26.297 "adrfam": "IPv4", 00:22:26.297 "traddr": "10.0.0.1", 00:22:26.297 "trsvcid": "44866" 00:22:26.297 }, 00:22:26.297 "auth": { 00:22:26.297 "state": "completed", 00:22:26.297 "digest": "sha512", 00:22:26.297 "dhgroup": "ffdhe3072" 00:22:26.297 } 00:22:26.297 } 00:22:26.297 ]' 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.297 10:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:26.297 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:26.297 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:26.297 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.297 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.297 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.558 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:22:27.501 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.502 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:27.502 10:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.502 10:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.502 10:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.502 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:27.502 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:27.502 10:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.502 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.762 00:22:27.762 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:27.762 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:27.762 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.023 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.023 10:16:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.023 10:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.023 10:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.023 10:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.023 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:28.023 { 00:22:28.023 "cntlid": 119, 00:22:28.023 "qid": 0, 00:22:28.023 "state": "enabled", 00:22:28.023 "listen_address": { 00:22:28.023 "trtype": "TCP", 00:22:28.023 "adrfam": "IPv4", 00:22:28.023 "traddr": "10.0.0.2", 00:22:28.023 "trsvcid": "4420" 00:22:28.023 }, 00:22:28.023 "peer_address": { 00:22:28.023 "trtype": "TCP", 00:22:28.023 "adrfam": "IPv4", 00:22:28.024 "traddr": "10.0.0.1", 00:22:28.024 "trsvcid": "44884" 00:22:28.024 }, 00:22:28.024 "auth": { 00:22:28.024 "state": "completed", 00:22:28.024 "digest": "sha512", 00:22:28.024 "dhgroup": "ffdhe3072" 00:22:28.024 } 00:22:28.024 } 00:22:28.024 ]' 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.024 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.285 10:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:28.857 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:29.119 10:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:29.381 00:22:29.381 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:29.381 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:29.381 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:29.642 { 00:22:29.642 "cntlid": 121, 00:22:29.642 "qid": 0, 00:22:29.642 "state": "enabled", 00:22:29.642 "listen_address": { 00:22:29.642 "trtype": "TCP", 00:22:29.642 "adrfam": "IPv4", 00:22:29.642 "traddr": "10.0.0.2", 00:22:29.642 "trsvcid": "4420" 00:22:29.642 }, 00:22:29.642 "peer_address": { 00:22:29.642 "trtype": "TCP", 00:22:29.642 "adrfam": "IPv4", 00:22:29.642 "traddr": "10.0.0.1", 00:22:29.642 "trsvcid": "44908" 00:22:29.642 }, 00:22:29.642 "auth": { 00:22:29.642 "state": "completed", 00:22:29.642 "digest": "sha512", 00:22:29.642 "dhgroup": "ffdhe4096" 00:22:29.642 } 00:22:29.642 } 00:22:29.642 ]' 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:29.642 10:16:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.642 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.903 10:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:30.486 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:30.747 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:30.748 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:31.009 00:22:31.009 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:31.009 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:31.009 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.270 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.270 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.270 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:31.270 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.270 10:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:31.270 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:31.270 { 00:22:31.271 "cntlid": 123, 00:22:31.271 "qid": 0, 00:22:31.271 "state": "enabled", 00:22:31.271 "listen_address": { 00:22:31.271 "trtype": "TCP", 00:22:31.271 "adrfam": "IPv4", 00:22:31.271 "traddr": "10.0.0.2", 00:22:31.271 "trsvcid": "4420" 00:22:31.271 }, 00:22:31.271 "peer_address": { 00:22:31.271 "trtype": "TCP", 00:22:31.271 "adrfam": "IPv4", 00:22:31.271 "traddr": "10.0.0.1", 00:22:31.271 "trsvcid": "44936" 00:22:31.271 }, 00:22:31.271 "auth": { 00:22:31.271 "state": "completed", 00:22:31.271 "digest": "sha512", 00:22:31.271 "dhgroup": "ffdhe4096" 00:22:31.271 } 00:22:31.271 } 00:22:31.271 ]' 00:22:31.271 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:31.271 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.271 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:31.271 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:31.271 10:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:31.271 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.271 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.271 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.532 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:32.475 10:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:32.475 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:32.736 00:22:32.736 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:32.736 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:32.736 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.736 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
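The entries above repeat one fixed DH-HMAC-CHAP verification cycle per key: pin the host to a single digest/DH-group pair, register the host NQN on the subsystem with the key under test, attach a controller through the host bdev layer (which forces the authentication handshake), assert the negotiated parameters on the target's qpair, then tear everything down again. A minimal standalone sketch of one iteration, using only commands and arguments visible in this run; the one assumption is that the target application answers on SPDK's default RPC socket, since the log's rpc_cmd wrapper does not show its socket argument:

  # Paths and NQNs as used by this run.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Host side: offer exactly one digest and one DH group for this pass.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Target side (default socket, see assumption above): allow the host with the key under test.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2

  # Attach a controller through the host stack; this triggers authentication.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2

  # Check what was actually negotiated on the target's qpair.
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'    # expect sha512
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'   # expect ffdhe4096
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'     # expect completed

  # Tear down before the next key.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Between the detach and the remove_host, the script also exercises the kernel initiator path (nvme connect with the matching --dhchap-secret, then nvme disconnect), which is where the NQN:...disconnected lines come from.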
00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:32.997 { 00:22:32.997 "cntlid": 125, 00:22:32.997 "qid": 0, 00:22:32.997 "state": "enabled", 00:22:32.997 "listen_address": { 00:22:32.997 "trtype": "TCP", 00:22:32.997 "adrfam": "IPv4", 00:22:32.997 "traddr": "10.0.0.2", 00:22:32.997 "trsvcid": "4420" 00:22:32.997 }, 00:22:32.997 "peer_address": { 00:22:32.997 "trtype": "TCP", 00:22:32.997 "adrfam": "IPv4", 00:22:32.997 "traddr": "10.0.0.1", 00:22:32.997 "trsvcid": "44972" 00:22:32.997 }, 00:22:32.997 "auth": { 00:22:32.997 "state": "completed", 00:22:32.997 "digest": "sha512", 00:22:32.997 "dhgroup": "ffdhe4096" 00:22:32.997 } 00:22:32.997 } 00:22:32.997 ]' 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.997 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.257 10:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:33.829 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.090 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:34.351 00:22:34.351 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:34.351 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:34.351 10:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.351 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.351 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.351 10:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:34.351 10:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.351 10:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:34.351 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:34.351 { 00:22:34.351 "cntlid": 127, 00:22:34.351 "qid": 0, 00:22:34.351 "state": "enabled", 00:22:34.351 "listen_address": { 00:22:34.351 "trtype": "TCP", 00:22:34.351 "adrfam": "IPv4", 00:22:34.351 "traddr": "10.0.0.2", 00:22:34.351 "trsvcid": "4420" 00:22:34.351 }, 00:22:34.351 "peer_address": { 00:22:34.351 "trtype": "TCP", 00:22:34.351 "adrfam": "IPv4", 00:22:34.351 "traddr": "10.0.0.1", 00:22:34.351 "trsvcid": "44992" 00:22:34.351 }, 00:22:34.351 "auth": { 00:22:34.351 "state": "completed", 00:22:34.351 "digest": "sha512", 00:22:34.351 "dhgroup": "ffdhe4096" 00:22:34.351 } 00:22:34.351 } 00:22:34.351 ]' 00:22:34.351 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:34.612 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:34.613 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:34.613 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:34.613 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:34.613 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.613 10:16:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.613 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.874 10:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:35.445 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.445 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:35.445 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.445 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.445 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.445 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.445 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:35.446 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.446 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:35.706 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:35.966 00:22:35.966 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:35.966 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.966 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:36.227 { 00:22:36.227 "cntlid": 129, 00:22:36.227 "qid": 0, 00:22:36.227 "state": "enabled", 00:22:36.227 "listen_address": { 00:22:36.227 "trtype": "TCP", 00:22:36.227 "adrfam": "IPv4", 00:22:36.227 "traddr": "10.0.0.2", 00:22:36.227 "trsvcid": "4420" 00:22:36.227 }, 00:22:36.227 "peer_address": { 00:22:36.227 "trtype": "TCP", 00:22:36.227 "adrfam": "IPv4", 00:22:36.227 "traddr": "10.0.0.1", 00:22:36.227 "trsvcid": "51128" 00:22:36.227 }, 00:22:36.227 "auth": { 00:22:36.227 "state": "completed", 00:22:36.227 "digest": "sha512", 00:22:36.227 "dhgroup": "ffdhe6144" 00:22:36.227 } 00:22:36.227 } 00:22:36.227 ]' 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:36.227 10:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:36.488 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.488 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.488 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.488 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.428 10:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:37.428 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:37.691 00:22:37.691 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:37.691 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.691 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:37.957 { 00:22:37.957 "cntlid": 131, 00:22:37.957 "qid": 0, 00:22:37.957 "state": "enabled", 00:22:37.957 "listen_address": { 00:22:37.957 "trtype": "TCP", 00:22:37.957 "adrfam": "IPv4", 00:22:37.957 "traddr": "10.0.0.2", 00:22:37.957 "trsvcid": "4420" 00:22:37.957 }, 00:22:37.957 "peer_address": { 00:22:37.957 
"trtype": "TCP", 00:22:37.957 "adrfam": "IPv4", 00:22:37.957 "traddr": "10.0.0.1", 00:22:37.957 "trsvcid": "51148" 00:22:37.957 }, 00:22:37.957 "auth": { 00:22:37.957 "state": "completed", 00:22:37.957 "digest": "sha512", 00:22:37.957 "dhgroup": "ffdhe6144" 00:22:37.957 } 00:22:37.957 } 00:22:37.957 ]' 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:37.957 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:38.218 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.218 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.218 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.218 10:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:39.162 10:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:39.423 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:39.684 { 00:22:39.684 "cntlid": 133, 00:22:39.684 "qid": 0, 00:22:39.684 "state": "enabled", 00:22:39.684 "listen_address": { 00:22:39.684 "trtype": "TCP", 00:22:39.684 "adrfam": "IPv4", 00:22:39.684 "traddr": "10.0.0.2", 00:22:39.684 "trsvcid": "4420" 00:22:39.684 }, 00:22:39.684 "peer_address": { 00:22:39.684 "trtype": "TCP", 00:22:39.684 "adrfam": "IPv4", 00:22:39.684 "traddr": "10.0.0.1", 00:22:39.684 "trsvcid": "51172" 00:22:39.684 }, 00:22:39.684 "auth": { 00:22:39.684 "state": "completed", 00:22:39.684 "digest": "sha512", 00:22:39.684 "dhgroup": "ffdhe6144" 00:22:39.684 } 00:22:39.684 } 00:22:39.684 ]' 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:39.684 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:39.946 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:39.946 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.946 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.946 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.946 10:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:40.892 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:41.464 00:22:41.464 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:41.464 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:41.464 10:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.464 10:16:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:41.464 { 00:22:41.464 "cntlid": 135, 00:22:41.464 "qid": 0, 00:22:41.464 "state": "enabled", 00:22:41.464 "listen_address": { 00:22:41.464 "trtype": "TCP", 00:22:41.464 "adrfam": "IPv4", 00:22:41.464 "traddr": "10.0.0.2", 00:22:41.464 "trsvcid": "4420" 00:22:41.464 }, 00:22:41.464 "peer_address": { 00:22:41.464 "trtype": "TCP", 00:22:41.464 "adrfam": "IPv4", 00:22:41.464 "traddr": "10.0.0.1", 00:22:41.464 "trsvcid": "51204" 00:22:41.464 }, 00:22:41.464 "auth": { 00:22:41.464 "state": "completed", 00:22:41.464 "digest": "sha512", 00:22:41.464 "dhgroup": "ffdhe6144" 00:22:41.464 } 00:22:41.464 } 00:22:41.464 ]' 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:41.464 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:41.726 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:41.726 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.726 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.726 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.726 10:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:42.298 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:42.560 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:43.133 00:22:43.133 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:43.133 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:43.133 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.394 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.394 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.394 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:43.394 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.394 10:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:43.394 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:43.394 { 00:22:43.394 "cntlid": 137, 00:22:43.394 "qid": 0, 00:22:43.394 "state": "enabled", 00:22:43.394 "listen_address": { 00:22:43.394 "trtype": "TCP", 00:22:43.394 "adrfam": "IPv4", 00:22:43.394 "traddr": "10.0.0.2", 00:22:43.394 "trsvcid": "4420" 00:22:43.394 }, 00:22:43.394 "peer_address": { 00:22:43.394 "trtype": "TCP", 00:22:43.394 "adrfam": "IPv4", 00:22:43.394 "traddr": "10.0.0.1", 00:22:43.394 "trsvcid": "51230" 00:22:43.394 }, 00:22:43.394 "auth": { 00:22:43.394 "state": "completed", 00:22:43.394 "digest": "sha512", 00:22:43.394 "dhgroup": "ffdhe8192" 00:22:43.394 } 00:22:43.394 } 00:22:43.394 ]' 00:22:43.394 10:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:43.394 10:16:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.394 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:43.394 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:43.394 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:43.394 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.394 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.394 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.656 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:44.228 10:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:44.489 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:45.061 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:45.061 { 00:22:45.061 "cntlid": 139, 00:22:45.061 "qid": 0, 00:22:45.061 "state": "enabled", 00:22:45.061 "listen_address": { 00:22:45.061 "trtype": "TCP", 00:22:45.061 "adrfam": "IPv4", 00:22:45.061 "traddr": "10.0.0.2", 00:22:45.061 "trsvcid": "4420" 00:22:45.061 }, 00:22:45.061 "peer_address": { 00:22:45.061 "trtype": "TCP", 00:22:45.061 "adrfam": "IPv4", 00:22:45.061 "traddr": "10.0.0.1", 00:22:45.061 "trsvcid": "51252" 00:22:45.061 }, 00:22:45.061 "auth": { 00:22:45.061 "state": "completed", 00:22:45.061 "digest": "sha512", 00:22:45.061 "dhgroup": "ffdhe8192" 00:22:45.061 } 00:22:45.061 } 00:22:45.061 ]' 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.061 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:45.323 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.323 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.323 10:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.323 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:ZmQ0ZWU4MDllNTZkYTE3MTMwMWIzN2M1MDIyODVhNjZO6JA0: 00:22:46.268 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:46.268 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.268 10:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:46.269 10:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:46.842 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
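A note on the --dhchap-secret strings passed to nvme connect throughout this run: they follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where the base64 payload is the configured secret with a 4-byte CRC-32 appended, and <t> names the hash used to transform the secret before use (00 = no transformation, 01/02/03 = SHA-256/384/512). That reading fits this log: key0 through key3 carry the prefixes DHHC-1:00: through DHHC-1:03:. The decoded length gives a quick consistency check (a sketch, shown for the key0 secret from earlier in the run; 52 bytes = 48-byte secret plus 4-byte CRC):

  echo 'Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==' \
      | base64 -d | wc -c    # prints 52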
00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:46.842 { 00:22:46.842 "cntlid": 141, 00:22:46.842 "qid": 0, 00:22:46.842 "state": "enabled", 00:22:46.842 "listen_address": { 00:22:46.842 "trtype": "TCP", 00:22:46.842 "adrfam": "IPv4", 00:22:46.842 "traddr": "10.0.0.2", 00:22:46.842 "trsvcid": "4420" 00:22:46.842 }, 00:22:46.842 "peer_address": { 00:22:46.842 "trtype": "TCP", 00:22:46.842 "adrfam": "IPv4", 00:22:46.842 "traddr": "10.0.0.1", 00:22:46.842 "trsvcid": "37760" 00:22:46.842 }, 00:22:46.842 "auth": { 00:22:46.842 "state": "completed", 00:22:46.842 "digest": "sha512", 00:22:46.842 "dhgroup": "ffdhe8192" 00:22:46.842 } 00:22:46.842 } 00:22:46.842 ]' 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.842 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:47.104 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:47.104 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:47.104 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.104 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.104 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.104 10:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:N2ZjMzNmMTI5ODcyNTAyMmI4ZThiNDU5YTA1Yjk5NTYwNWFhNjllNmNjZjI2MzEwEGoFqA==: 00:22:47.677 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.939 10:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:47.940 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:47.940 10:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.513 00:22:48.513 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:48.513 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:48.513 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:48.775 { 00:22:48.775 "cntlid": 143, 00:22:48.775 "qid": 0, 00:22:48.775 "state": "enabled", 00:22:48.775 "listen_address": { 00:22:48.775 "trtype": "TCP", 00:22:48.775 "adrfam": "IPv4", 00:22:48.775 "traddr": "10.0.0.2", 00:22:48.775 "trsvcid": "4420" 00:22:48.775 }, 00:22:48.775 "peer_address": { 00:22:48.775 "trtype": "TCP", 00:22:48.775 "adrfam": "IPv4", 00:22:48.775 "traddr": "10.0.0.1", 00:22:48.775 "trsvcid": "37800" 00:22:48.775 }, 00:22:48.775 "auth": { 00:22:48.775 "state": "completed", 00:22:48.775 "digest": "sha512", 00:22:48.775 "dhgroup": "ffdhe8192" 00:22:48.775 } 00:22:48.775 } 00:22:48.775 ]' 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:48.775 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:48.776 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:48.776 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.776 10:16:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.776 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.038 10:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:ODQyZDQwYzYwNWUyOTE1MWI3YzA4MzlkYjEyZTQ3YTcxOTAzZjVlNTk1YWE3MWE1ZTBiNzA2YjE1OGYwNTQyYTPq7Gc=: 00:22:49.611 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:49.872 10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:49.872 
10:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:50.445 00:22:50.445 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:50.445 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.445 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:50.706 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.706 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.706 10:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:50.706 10:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.706 10:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:50.706 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:50.706 { 00:22:50.706 "cntlid": 145, 00:22:50.706 "qid": 0, 00:22:50.706 "state": "enabled", 00:22:50.706 "listen_address": { 00:22:50.706 "trtype": "TCP", 00:22:50.706 "adrfam": "IPv4", 00:22:50.706 "traddr": "10.0.0.2", 00:22:50.706 "trsvcid": "4420" 00:22:50.707 }, 00:22:50.707 "peer_address": { 00:22:50.707 "trtype": "TCP", 00:22:50.707 "adrfam": "IPv4", 00:22:50.707 "traddr": "10.0.0.1", 00:22:50.707 "trsvcid": "37820" 00:22:50.707 }, 00:22:50.707 "auth": { 00:22:50.707 "state": "completed", 00:22:50.707 "digest": "sha512", 00:22:50.707 "dhgroup": "ffdhe8192" 00:22:50.707 } 00:22:50.707 } 00:22:50.707 ]' 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.707 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.968 10:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:Njk0MTViYjg0YjM4NDUxNjc4OWM0MzRmOThkYjZkYTU2NDBjN2U1MzIwOThkYzdm5wiUqg==: 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:51.540 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:22:51.541 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:51.541 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:22:51.541 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:51.541 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:22:51.541 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:51.541 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:51.541 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:52.116 request: 00:22:52.116 { 00:22:52.116 "name": "nvme0", 00:22:52.116 "trtype": "tcp", 00:22:52.116 "traddr": "10.0.0.2", 00:22:52.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:52.116 "adrfam": "ipv4", 00:22:52.116 "trsvcid": "4420", 00:22:52.116 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:52.116 "dhchap_key": "key2", 00:22:52.116 "method": "bdev_nvme_attach_controller", 00:22:52.116 "req_id": 1 00:22:52.116 } 00:22:52.116 Got JSON-RPC error response 00:22:52.116 response: 00:22:52.116 { 00:22:52.116 "code": -32602, 00:22:52.116 "message": "Invalid parameters" 00:22:52.116 } 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:52.116 10:16:37 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2840476 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2840476 ']' 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2840476 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2840476 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2840476' 00:22:52.116 killing process with pid 2840476 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2840476 00:22:52.116 10:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2840476 00:22:52.437 10:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:52.437 10:16:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.437 10:16:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:52.437 10:16:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.437 10:16:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:52.437 10:16:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.437 10:16:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.437 rmmod nvme_tcp 00:22:52.437 rmmod nvme_fabrics 00:22:52.437 rmmod nvme_keyring 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2840159 ']' 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2840159 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2840159 ']' 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2840159 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:22:52.437 
10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2840159 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:22:52.437 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2840159' 00:22:52.437 killing process with pid 2840159 00:22:52.438 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2840159 00:22:52.438 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2840159 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.698 10:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.614 10:16:40 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.614 10:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vZV /tmp/spdk.key-sha256.E7x /tmp/spdk.key-sha384.2ye /tmp/spdk.key-sha512.MQA /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:54.614 00:22:54.614 real 2m17.219s 00:22:54.614 user 5m3.463s 00:22:54.614 sys 0m20.627s 00:22:54.615 10:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:22:54.615 10:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.615 ************************************ 00:22:54.615 END TEST nvmf_auth_target 00:22:54.615 ************************************ 00:22:54.615 10:16:40 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:54.615 10:16:40 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:54.615 10:16:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:22:54.615 10:16:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:22:54.615 10:16:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.615 ************************************ 00:22:54.615 START TEST nvmf_bdevio_no_huge 00:22:54.615 ************************************ 00:22:54.615 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:54.876 * Looking for test storage... 
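Before moving on to the bdevio run: the nvmf_auth_target pass that finishes above reduces, per key, to the DH-HMAC-CHAP sequence sketched below. RPC names and flags are exactly as logged; "rpc.py" stands for the full scripts/rpc.py path the log uses, and <host-nqn> is a placeholder for the generated uuid host NQN, so treat this as an outline rather than the literal commands.
# target side: permit the host to use the selected key for the subsystem
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key3
# host side (second SPDK app answering on /var/tmp/host.sock): pin the digest/dhgroup pair, then attach
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
# confirm the qpair authenticated with the expected parameters
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
# the kernel initiator path does the same with: nvme connect ... --dhchap-secret DHHC-1:...
# negative check: attaching with a key the target was never given fails with -32602 "Invalid parameters", as seen above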
00:22:54.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.876 10:16:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.876 10:16:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:01.504 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:01.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:01.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.504 10:16:46 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:01.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.504 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:23:01.505 00:23:01.505 --- 10.0.0.2 ping statistics --- 00:23:01.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.505 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:23:01.505 00:23:01.505 --- 10.0.0.1 ping statistics --- 00:23:01.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.505 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.505 10:16:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2870583 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2870583 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 2870583 ']' 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
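The namespace and link plumbing performed just above by nvmf/common.sh amounts to the steps below, using the device names the log reports for the two e810 ports (cvl_0_0 target side, cvl_0_1 initiator side) and the 10.0.0.0/24 addresses it configures.
ip netns add cvl_0_0_ns_spdk                        # the target gets its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps 10.0.0.1 in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the listener port
ping -c 1 10.0.0.2                                  # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1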
00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:01.505 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:01.505 [2024-05-15 10:16:47.072917] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:01.505 [2024-05-15 10:16:47.072991] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:01.505 [2024-05-15 10:16:47.162789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.505 [2024-05-15 10:16:47.244122] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.505 [2024-05-15 10:16:47.244178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.505 [2024-05-15 10:16:47.244186] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.505 [2024-05-15 10:16:47.244194] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.505 [2024-05-15 10:16:47.244200] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.505 [2024-05-15 10:16:47.244369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.505 [2024-05-15 10:16:47.244530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:01.505 [2024-05-15 10:16:47.244689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:01.505 [2024-05-15 10:16:47.244690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.080 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:02.080 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:23:02.080 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.080 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:02.080 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.343 [2024-05-15 10:16:47.909215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.343 Malloc0 
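The target for this run was launched inside that namespace without hugepages and is then configured over its RPC socket. Condensing what the log shows so far (paths shortened, flags verbatim; per the EAL parameters line, --no-huge -s 1024 keeps the app on plain anonymous memory capped at 1024 MB):
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
# wait for /var/tmp/spdk.sock to come up, then:
rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as assembled by nvmf/common.sh
rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev with 512-byte blocks
# the nqn.2016-06.io.spdk:cnode1 subsystem, its Malloc0 namespace and the 10.0.0.2:4420 listener are added in the lines that follow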
00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:02.343 [2024-05-15 10:16:47.962718] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:02.343 [2024-05-15 10:16:47.963061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:02.343 { 00:23:02.343 "params": { 00:23:02.343 "name": "Nvme$subsystem", 00:23:02.343 "trtype": "$TEST_TRANSPORT", 00:23:02.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:02.343 "adrfam": "ipv4", 00:23:02.343 "trsvcid": "$NVMF_PORT", 00:23:02.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:02.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:02.343 "hdgst": ${hdgst:-false}, 00:23:02.343 "ddgst": ${ddgst:-false} 00:23:02.343 }, 00:23:02.343 "method": "bdev_nvme_attach_controller" 00:23:02.343 } 00:23:02.343 EOF 00:23:02.343 )") 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:02.343 10:16:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:02.343 "params": { 00:23:02.343 "name": "Nvme1", 00:23:02.343 "trtype": "tcp", 00:23:02.343 "traddr": "10.0.0.2", 00:23:02.343 "adrfam": "ipv4", 00:23:02.343 "trsvcid": "4420", 00:23:02.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.343 "hdgst": false, 00:23:02.343 "ddgst": false 00:23:02.343 }, 00:23:02.343 "method": "bdev_nvme_attach_controller" 00:23:02.343 }' 00:23:02.343 [2024-05-15 10:16:48.024071] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:02.343 [2024-05-15 10:16:48.024162] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2870719 ] 00:23:02.343 [2024-05-15 10:16:48.091586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:02.604 [2024-05-15 10:16:48.163582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.604 [2024-05-15 10:16:48.163700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.604 [2024-05-15 10:16:48.163702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.604 I/O targets: 00:23:02.604 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:02.604 00:23:02.604 00:23:02.604 CUnit - A unit testing framework for C - Version 2.1-3 00:23:02.604 http://cunit.sourceforge.net/ 00:23:02.604 00:23:02.604 00:23:02.604 Suite: bdevio tests on: Nvme1n1 00:23:02.604 Test: blockdev write read block ...passed 00:23:02.604 Test: blockdev write zeroes read block ...passed 00:23:02.865 Test: blockdev write zeroes read no split ...passed 00:23:02.865 Test: blockdev write zeroes read split ...passed 00:23:02.865 Test: blockdev write zeroes read split partial ...passed 00:23:02.865 Test: blockdev reset ...[2024-05-15 10:16:48.484106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:02.865 [2024-05-15 10:16:48.484162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x135e390 (9): Bad file descriptor 00:23:02.865 [2024-05-15 10:16:48.502359] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:02.865 passed 00:23:02.865 Test: blockdev write read 8 blocks ...passed 00:23:02.865 Test: blockdev write read size > 128k ...passed 00:23:02.865 Test: blockdev write read invalid size ...passed 00:23:02.865 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:02.865 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:02.865 Test: blockdev write read max offset ...passed 00:23:03.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:03.127 Test: blockdev writev readv 8 blocks ...passed 00:23:03.127 Test: blockdev writev readv 30 x 1block ...passed 00:23:03.127 Test: blockdev writev readv block ...passed 00:23:03.127 Test: blockdev writev readv size > 128k ...passed 00:23:03.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:03.127 Test: blockdev comparev and writev ...[2024-05-15 10:16:48.747992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.748018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.748029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.748035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.748786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.748796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.748807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.748812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.749518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.749527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.749538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.749545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.750263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.750272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.750281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:03.127 [2024-05-15 10:16:48.750286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:03.127 passed 00:23:03.127 Test: blockdev nvme passthru rw ...passed 00:23:03.127 Test: blockdev nvme passthru vendor specific ...[2024-05-15 10:16:48.836696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:03.127 [2024-05-15 10:16:48.836712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.837269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:03.127 [2024-05-15 10:16:48.837277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.837860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:03.127 [2024-05-15 10:16:48.837868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:03.127 [2024-05-15 10:16:48.838435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:03.128 [2024-05-15 10:16:48.838444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:03.128 passed 00:23:03.128 Test: blockdev nvme admin passthru ...passed 00:23:03.128 Test: blockdev copy ...passed 00:23:03.128 00:23:03.128 Run Summary: Type Total Ran Passed Failed Inactive 00:23:03.128 suites 1 1 n/a 0 0 00:23:03.128 tests 23 23 23 0 0 00:23:03.128 asserts 152 152 152 0 n/a 00:23:03.128 00:23:03.128 Elapsed time = 1.192 seconds 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.389 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.389 rmmod nvme_tcp 00:23:03.389 rmmod nvme_fabrics 00:23:03.651 rmmod nvme_keyring 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2870583 ']' 00:23:03.651 10:16:49 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2870583 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 2870583 ']' 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 2870583 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2870583 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2870583' 00:23:03.651 killing process with pid 2870583 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 2870583 00:23:03.651 [2024-05-15 10:16:49.276119] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:03.651 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 2870583 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.912 10:16:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.461 10:16:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.461 00:23:06.461 real 0m11.311s 00:23:06.461 user 0m12.737s 00:23:06.461 sys 0m5.961s 00:23:06.461 10:16:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:06.461 10:16:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:06.461 ************************************ 00:23:06.461 END TEST nvmf_bdevio_no_huge 00:23:06.461 ************************************ 00:23:06.461 10:16:51 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:06.461 10:16:51 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:06.461 10:16:51 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:06.461 10:16:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.461 ************************************ 00:23:06.461 START TEST nvmf_tls 00:23:06.461 ************************************ 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
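Per-test teardown (nvmftestfini), as just performed for the bdevio run before tls.sh starts, follows the same outline each time; $nvmfpid is a placeholder for the target pid reported above (2870583 here).
modprobe -v -r nvme-tcp; modprobe -v -r nvme-fabrics; modprobe -v -r nvme-keyring   # unload host-side kernel modules
kill $nvmfpid && wait $nvmfpid       # killprocess: stop the nvmf_tgt started for this test
_remove_spdk_ns                      # nvmf/common.sh helper that cleans up the cvl_0_0_ns_spdk namespace
ip -4 addr flush cvl_0_1             # clear the initiator-side address before the next test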
00:23:06.461 * Looking for test storage... 00:23:06.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.461 10:16:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:13.055 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.055 
10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:13.055 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:13.055 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:13.055 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.055 
10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:13.055 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:13.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:23:13.317 00:23:13.317 --- 10.0.0.2 ping statistics --- 00:23:13.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.317 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.473 ms 00:23:13.317 00:23:13.317 --- 10.0.0.1 ping statistics --- 00:23:13.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.317 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:13.317 10:16:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2875056 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2875056 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2875056 ']' 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:13.317 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.317 [2024-05-15 10:16:59.090682] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:13.317 [2024-05-15 10:16:59.090735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.579 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.579 [2024-05-15 10:16:59.172770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.579 [2024-05-15 10:16:59.203180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.579 [2024-05-15 10:16:59.203219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:13.579 [2024-05-15 10:16:59.203226] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.579 [2024-05-15 10:16:59.203236] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.579 [2024-05-15 10:16:59.203242] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.579 [2024-05-15 10:16:59.203264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:14.155 10:16:59 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:14.435 true 00:23:14.435 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:14.435 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:14.705 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:14.705 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:14.705 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:14.706 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:14.706 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:14.968 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:14.968 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:14.968 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:15.229 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:15.229 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:15.230 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:15.230 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:15.230 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:15.230 10:17:00 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:15.492 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:15.492 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:15.492 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:15.753 10:17:01 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:15.753 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:15.753 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:15.753 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:15.753 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:16.060 10:17:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.8Gp2Odtyp3 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.NkFvWk76ti 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.8Gp2Odtyp3 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NkFvWk76ti 00:23:16.322 10:17:01 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:23:16.322 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:16.585 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.8Gp2Odtyp3 00:23:16.585 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8Gp2Odtyp3 00:23:16.585 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:16.847 [2024-05-15 10:17:02.492449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.847 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:17.108 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:17.108 [2024-05-15 10:17:02.789156] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:17.108 [2024-05-15 10:17:02.789198] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.108 [2024-05-15 10:17:02.789364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.108 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:17.369 malloc0 00:23:17.369 10:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.369 10:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Gp2Odtyp3 00:23:17.630 [2024-05-15 10:17:03.264307] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:17.630 10:17:03 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8Gp2Odtyp3 00:23:17.630 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.643 Initializing NVMe Controllers 00:23:27.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:27.643 Initialization complete. Launching workers. 
00:23:27.643 ======================================================== 00:23:27.643 Latency(us) 00:23:27.643 Device Information : IOPS MiB/s Average min max 00:23:27.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19053.07 74.43 3359.04 1147.32 5142.12 00:23:27.643 ======================================================== 00:23:27.643 Total : 19053.07 74.43 3359.04 1147.32 5142.12 00:23:27.643 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Gp2Odtyp3 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8Gp2Odtyp3' 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2877918 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2877918 /var/tmp/bdevperf.sock 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2877918 ']' 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.643 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:27.644 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.644 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:27.644 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:27.644 [2024-05-15 10:17:13.434832] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:27.644 [2024-05-15 10:17:13.434889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2877918 ] 00:23:27.905 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.905 [2024-05-15 10:17:13.484567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.905 [2024-05-15 10:17:13.512563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.905 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:27.905 10:17:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:27.905 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Gp2Odtyp3 00:23:28.166 [2024-05-15 10:17:13.715018] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.166 [2024-05-15 10:17:13.715072] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.166 TLSTESTn1 00:23:28.166 10:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:28.166 Running I/O for 10 seconds... 00:23:40.408 00:23:40.408 Latency(us) 00:23:40.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.408 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.408 Verification LBA range: start 0x0 length 0x2000 00:23:40.408 TLSTESTn1 : 10.09 1362.34 5.32 0.00 0.00 93599.88 5843.63 168645.97 00:23:40.408 =================================================================================================================== 00:23:40.408 Total : 1362.34 5.32 0.00 0.00 93599.88 5843.63 168645.97 00:23:40.408 0 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2877918 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2877918 ']' 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2877918 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2877918 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2877918' 00:23:40.408 killing process with pid 2877918 00:23:40.408 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2877918 00:23:40.408 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.408 00:23:40.408 Latency(us) 00:23:40.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:40.408 =================================================================================================================== 00:23:40.408 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.408 [2024-05-15 10:17:24.098006] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2877918 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NkFvWk76ti 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NkFvWk76ti 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NkFvWk76ti 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NkFvWk76ti' 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2880098 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2880098 /var/tmp/bdevperf.sock 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2880098 ']' 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.409 [2024-05-15 10:17:24.254124] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:40.409 [2024-05-15 10:17:24.254180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880098 ] 00:23:40.409 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.409 [2024-05-15 10:17:24.303307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.409 [2024-05-15 10:17:24.331127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NkFvWk76ti 00:23:40.409 [2024-05-15 10:17:24.533414] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.409 [2024-05-15 10:17:24.533469] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:40.409 [2024-05-15 10:17:24.541713] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:40.409 [2024-05-15 10:17:24.541918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcf3e0 (107): Transport endpoint is not connected 00:23:40.409 [2024-05-15 10:17:24.542784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcf3e0 (9): Bad file descriptor 00:23:40.409 [2024-05-15 10:17:24.543785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:40.409 [2024-05-15 10:17:24.543792] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:40.409 [2024-05-15 10:17:24.543799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:40.409 request: 00:23:40.409 { 00:23:40.409 "name": "TLSTEST", 00:23:40.409 "trtype": "tcp", 00:23:40.409 "traddr": "10.0.0.2", 00:23:40.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.409 "adrfam": "ipv4", 00:23:40.409 "trsvcid": "4420", 00:23:40.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.409 "psk": "/tmp/tmp.NkFvWk76ti", 00:23:40.409 "method": "bdev_nvme_attach_controller", 00:23:40.409 "req_id": 1 00:23:40.409 } 00:23:40.409 Got JSON-RPC error response 00:23:40.409 response: 00:23:40.409 { 00:23:40.409 "code": -32602, 00:23:40.409 "message": "Invalid parameters" 00:23:40.409 } 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2880098 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2880098 ']' 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2880098 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2880098 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2880098' 00:23:40.409 killing process with pid 2880098 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2880098 00:23:40.409 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.409 00:23:40.409 Latency(us) 00:23:40.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.409 =================================================================================================================== 00:23:40.409 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.409 [2024-05-15 10:17:24.628809] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2880098 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Gp2Odtyp3 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Gp2Odtyp3 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Gp2Odtyp3 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8Gp2Odtyp3' 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2880128 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2880128 /var/tmp/bdevperf.sock 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2880128 ']' 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.409 [2024-05-15 10:17:24.784087] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:40.409 [2024-05-15 10:17:24.784158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880128 ] 00:23:40.409 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.409 [2024-05-15 10:17:24.835333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.409 [2024-05-15 10:17:24.861007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:40.409 10:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.8Gp2Odtyp3 00:23:40.409 [2024-05-15 10:17:25.075248] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.409 [2024-05-15 10:17:25.075315] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:40.409 [2024-05-15 10:17:25.082948] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:40.410 [2024-05-15 10:17:25.082965] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:40.410 [2024-05-15 10:17:25.082985] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:40.410 [2024-05-15 10:17:25.083650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae23e0 (107): Transport endpoint is not connected 00:23:40.410 [2024-05-15 10:17:25.084645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae23e0 (9): Bad file descriptor 00:23:40.410 [2024-05-15 10:17:25.085647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:40.410 [2024-05-15 10:17:25.085653] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:40.410 [2024-05-15 10:17:25.085659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:40.410 request: 00:23:40.410 { 00:23:40.410 "name": "TLSTEST", 00:23:40.410 "trtype": "tcp", 00:23:40.410 "traddr": "10.0.0.2", 00:23:40.410 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.410 "adrfam": "ipv4", 00:23:40.410 "trsvcid": "4420", 00:23:40.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.410 "psk": "/tmp/tmp.8Gp2Odtyp3", 00:23:40.410 "method": "bdev_nvme_attach_controller", 00:23:40.410 "req_id": 1 00:23:40.410 } 00:23:40.410 Got JSON-RPC error response 00:23:40.410 response: 00:23:40.410 { 00:23:40.410 "code": -32602, 00:23:40.410 "message": "Invalid parameters" 00:23:40.410 } 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2880128 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2880128 ']' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2880128 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2880128 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2880128' 00:23:40.410 killing process with pid 2880128 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2880128 00:23:40.410 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.410 00:23:40.410 Latency(us) 00:23:40.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.410 =================================================================================================================== 00:23:40.410 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.410 [2024-05-15 10:17:25.169335] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2880128 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Gp2Odtyp3 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Gp2Odtyp3 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
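The failure recorded above is the hostnqn-mismatch case: the target only has a PSK registered for nqn.2016-06.io.spdk:host1 against cnode1, so a connection presenting host2 makes the target log "Could not find PSK for identity" and the attach RPC returns -32602. A condensed sketch of the two RPC calls involved, taken from the commands in this trace (rpc.py abbreviates the full scripts/rpc.py path used above; the /tmp key path comes from this run):

    # target side: register the key for host1 on cnode1 (target/tls.sh@58 above)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Gp2Odtyp3

    # initiator side: this attach presents hostnqn host2, so the target looks up
    # identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1",
    # finds no PSK for it, and the RPC fails with -32602 "Invalid parameters"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
        --psk /tmp/tmp.8Gp2Odtyp3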
-- # case "$(type -t "$arg")" in 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Gp2Odtyp3 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8Gp2Odtyp3' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2880149 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2880149 /var/tmp/bdevperf.sock 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2880149 ']' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.410 [2024-05-15 10:17:25.315337] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:40.410 [2024-05-15 10:17:25.315390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880149 ] 00:23:40.410 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.410 [2024-05-15 10:17:25.365336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.410 [2024-05-15 10:17:25.391508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Gp2Odtyp3 00:23:40.410 [2024-05-15 10:17:25.606028] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.410 [2024-05-15 10:17:25.606088] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:40.410 [2024-05-15 10:17:25.614220] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:40.410 [2024-05-15 10:17:25.614237] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:40.410 [2024-05-15 10:17:25.614256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:40.410 [2024-05-15 10:17:25.614563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1a3e0 (107): Transport endpoint is not connected 00:23:40.410 [2024-05-15 10:17:25.615419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1a3e0 (9): Bad file descriptor 00:23:40.410 [2024-05-15 10:17:25.616420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:40.410 [2024-05-15 10:17:25.616428] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:40.410 [2024-05-15 10:17:25.616435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:40.410 request: 00:23:40.410 { 00:23:40.410 "name": "TLSTEST", 00:23:40.410 "trtype": "tcp", 00:23:40.410 "traddr": "10.0.0.2", 00:23:40.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.410 "adrfam": "ipv4", 00:23:40.410 "trsvcid": "4420", 00:23:40.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.410 "psk": "/tmp/tmp.8Gp2Odtyp3", 00:23:40.410 "method": "bdev_nvme_attach_controller", 00:23:40.410 "req_id": 1 00:23:40.410 } 00:23:40.410 Got JSON-RPC error response 00:23:40.410 response: 00:23:40.410 { 00:23:40.410 "code": -32602, 00:23:40.410 "message": "Invalid parameters" 00:23:40.410 } 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2880149 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2880149 ']' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2880149 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2880149 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2880149' 00:23:40.410 killing process with pid 2880149 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2880149 00:23:40.410 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.410 00:23:40.410 Latency(us) 00:23:40.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.410 =================================================================================================================== 00:23:40.410 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.410 [2024-05-15 10:17:25.701311] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2880149 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.410 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2880382 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2880382 /var/tmp/bdevperf.sock 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2880382 ']' 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:40.411 10:17:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.411 [2024-05-15 10:17:25.857466] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:40.411 [2024-05-15 10:17:25.857537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880382 ] 00:23:40.411 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.411 [2024-05-15 10:17:25.908652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.411 [2024-05-15 10:17:25.935389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:40.411 [2024-05-15 10:17:26.155570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:40.411 [2024-05-15 10:17:26.156716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4bad0 (9): Bad file descriptor 00:23:40.411 [2024-05-15 10:17:26.157714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:40.411 [2024-05-15 10:17:26.157721] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:40.411 [2024-05-15 10:17:26.157728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:40.411 request: 00:23:40.411 { 00:23:40.411 "name": "TLSTEST", 00:23:40.411 "trtype": "tcp", 00:23:40.411 "traddr": "10.0.0.2", 00:23:40.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.411 "adrfam": "ipv4", 00:23:40.411 "trsvcid": "4420", 00:23:40.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.411 "method": "bdev_nvme_attach_controller", 00:23:40.411 "req_id": 1 00:23:40.411 } 00:23:40.411 Got JSON-RPC error response 00:23:40.411 response: 00:23:40.411 { 00:23:40.411 "code": -32602, 00:23:40.411 "message": "Invalid parameters" 00:23:40.411 } 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2880382 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2880382 ']' 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2880382 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.411 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2880382 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2880382' 00:23:40.672 killing process with pid 2880382 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2880382 00:23:40.672 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.672 00:23:40.672 Latency(us) 00:23:40.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.672 =================================================================================================================== 00:23:40.672 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2880382 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2875056 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2875056 ']' 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2875056 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2875056 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2875056' 00:23:40.672 killing process with pid 2875056 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2875056 
00:23:40.672 [2024-05-15 10:17:26.393442] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:40.672 [2024-05-15 10:17:26.393468] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:40.672 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2875056 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.GgiKPXKC3x 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.GgiKPXKC3x 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2880501 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2880501 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2880501 ']' 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:40.934 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.934 [2024-05-15 10:17:26.625358] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:40.934 [2024-05-15 10:17:26.625417] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.934 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.934 [2024-05-15 10:17:26.707848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.195 [2024-05-15 10:17:26.736602] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.195 [2024-05-15 10:17:26.736636] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.195 [2024-05-15 10:17:26.736641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.196 [2024-05-15 10:17:26.736646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.196 [2024-05-15 10:17:26.736650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.196 [2024-05-15 10:17:26.736668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.GgiKPXKC3x 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GgiKPXKC3x 00:23:41.196 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.196 [2024-05-15 10:17:26.984688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.457 10:17:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:41.457 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.718 [2024-05-15 10:17:27.277394] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:41.718 [2024-05-15 10:17:27.277432] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.718 [2024-05-15 10:17:27.277590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.718 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.718 malloc0 00:23:41.718 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
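For reference, the NVMeTLSkey-1:02:... value produced by format_interchange_psk above can be reconstructed from the raw configured key alone. A minimal sketch of that transformation follows; it is not copied from the test helper, and the little-endian CRC byte order in particular is an assumption.

```python
import base64
import zlib


def format_interchange_psk(key: str, hash_id: int) -> str:
    """Sketch of how a configured PSK becomes an NVMe TLS interchange key.

    Assumptions: the configured key string is used byte-for-byte, a CRC-32 of
    those bytes is appended (little-endian assumed), and the result is
    base64-encoded and wrapped as 'NVMeTLSkey-1:<hash>:<base64>:'.
    """
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02x}:{b64}:"


# Compare against the key_long value generated in the log above:
print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
```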
00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GgiKPXKC3x 00:23:41.980 [2024-05-15 10:17:27.728344] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GgiKPXKC3x 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GgiKPXKC3x' 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2880851 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2880851 /var/tmp/bdevperf.sock 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2880851 ']' 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:41.980 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.980 [2024-05-15 10:17:27.773916] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:41.980 [2024-05-15 10:17:27.773971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880851 ] 00:23:42.242 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.242 [2024-05-15 10:17:27.823960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.242 [2024-05-15 10:17:27.851798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.242 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:42.242 10:17:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:42.242 10:17:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GgiKPXKC3x 00:23:42.503 [2024-05-15 10:17:28.066150] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.503 [2024-05-15 10:17:28.066216] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:42.503 TLSTESTn1 00:23:42.503 10:17:28 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:42.503 Running I/O for 10 seconds... 00:23:52.543 00:23:52.543 Latency(us) 00:23:52.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.543 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:52.543 Verification LBA range: start 0x0 length 0x2000 00:23:52.543 TLSTESTn1 : 10.05 1495.73 5.84 0.00 0.00 85389.24 6062.08 115343.36 00:23:52.543 =================================================================================================================== 00:23:52.543 Total : 1495.73 5.84 0.00 0.00 85389.24 6062.08 115343.36 00:23:52.805 0 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2880851 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2880851 ']' 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2880851 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2880851 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2880851' 00:23:52.805 killing process with pid 2880851 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2880851 00:23:52.805 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.805 00:23:52.805 Latency(us) 00:23:52.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:52.805 =================================================================================================================== 00:23:52.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.805 [2024-05-15 10:17:38.416150] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2880851 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.GgiKPXKC3x 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GgiKPXKC3x 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GgiKPXKC3x 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GgiKPXKC3x 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GgiKPXKC3x' 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2882869 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2882869 /var/tmp/bdevperf.sock 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2882869 ']' 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:52.805 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.805 [2024-05-15 10:17:38.575956] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:52.805 [2024-05-15 10:17:38.576011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882869 ] 00:23:53.067 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.067 [2024-05-15 10:17:38.625989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.067 [2024-05-15 10:17:38.652497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.067 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:53.067 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:53.067 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GgiKPXKC3x 00:23:53.382 [2024-05-15 10:17:38.866916] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.382 [2024-05-15 10:17:38.866959] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:53.382 [2024-05-15 10:17:38.866964] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.GgiKPXKC3x 00:23:53.382 request: 00:23:53.382 { 00:23:53.382 "name": "TLSTEST", 00:23:53.383 "trtype": "tcp", 00:23:53.383 "traddr": "10.0.0.2", 00:23:53.383 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.383 "adrfam": "ipv4", 00:23:53.383 "trsvcid": "4420", 00:23:53.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.383 "psk": "/tmp/tmp.GgiKPXKC3x", 00:23:53.383 "method": "bdev_nvme_attach_controller", 00:23:53.383 "req_id": 1 00:23:53.383 } 00:23:53.383 Got JSON-RPC error response 00:23:53.383 response: 00:23:53.383 { 00:23:53.383 "code": -1, 00:23:53.383 "message": "Operation not permitted" 00:23:53.383 } 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2882869 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2882869 ']' 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2882869 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2882869 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2882869' 00:23:53.383 killing process with pid 2882869 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2882869 00:23:53.383 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.383 00:23:53.383 Latency(us) 00:23:53.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.383 =================================================================================================================== 00:23:53.383 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.383 10:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 
-- # wait 2882869 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2880501 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2880501 ']' 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2880501 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2880501 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2880501' 00:23:53.383 killing process with pid 2880501 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2880501 00:23:53.383 [2024-05-15 10:17:39.103646] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:53.383 [2024-05-15 10:17:39.103681] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:53.383 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2880501 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2882906 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2882906 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2882906 ']' 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:53.645 10:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.645 [2024-05-15 10:17:39.272560] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:53.645 [2024-05-15 10:17:39.272610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.645 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.645 [2024-05-15 10:17:39.353448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.645 [2024-05-15 10:17:39.380371] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.645 [2024-05-15 10:17:39.380405] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.645 [2024-05-15 10:17:39.380410] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.645 [2024-05-15 10:17:39.380415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.645 [2024-05-15 10:17:39.380419] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.645 [2024-05-15 10:17:39.380438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.GgiKPXKC3x 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.GgiKPXKC3x 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.GgiKPXKC3x 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GgiKPXKC3x 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.590 [2024-05-15 10:17:40.217155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.590 10:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.851 10:17:40 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.851 [2024-05-15 10:17:40.529896] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:54.851 [2024-05-15 10:17:40.529940] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.851 [2024-05-15 10:17:40.530116] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.851 10:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.113 malloc0 00:23:55.113 10:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:55.113 10:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GgiKPXKC3x 00:23:55.374 [2024-05-15 10:17:40.989016] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:55.374 [2024-05-15 10:17:40.989039] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:55.374 [2024-05-15 10:17:40.989065] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:55.374 request: 00:23:55.374 { 00:23:55.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.374 "host": "nqn.2016-06.io.spdk:host1", 00:23:55.374 "psk": "/tmp/tmp.GgiKPXKC3x", 00:23:55.374 "method": "nvmf_subsystem_add_host", 00:23:55.374 "req_id": 1 00:23:55.374 } 00:23:55.374 Got JSON-RPC error response 00:23:55.374 response: 00:23:55.374 { 00:23:55.374 "code": -32603, 00:23:55.374 "message": "Internal error" 00:23:55.374 } 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2882906 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2882906 ']' 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2882906 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2882906 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2882906' 00:23:55.374 killing process with pid 2882906 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2882906 00:23:55.374 [2024-05-15 10:17:41.063941] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:55.374 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2882906 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.GgiKPXKC3x 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2883403 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2883403 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2883403 ']' 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:55.636 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.636 [2024-05-15 10:17:41.233536] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:55.636 [2024-05-15 10:17:41.233590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.636 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.636 [2024-05-15 10:17:41.314237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.636 [2024-05-15 10:17:41.342760] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.636 [2024-05-15 10:17:41.342798] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.636 [2024-05-15 10:17:41.342803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.636 [2024-05-15 10:17:41.342808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.636 [2024-05-15 10:17:41.342812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:55.636 [2024-05-15 10:17:41.342833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.210 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:56.210 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:56.210 10:17:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.210 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:56.210 10:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.471 10:17:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.471 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.GgiKPXKC3x 00:23:56.472 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GgiKPXKC3x 00:23:56.472 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:56.472 [2024-05-15 10:17:42.172509] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.472 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:56.733 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:56.733 [2024-05-15 10:17:42.465214] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:56.733 [2024-05-15 10:17:42.465250] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.733 [2024-05-15 10:17:42.465411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.733 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:56.994 malloc0 00:23:56.994 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.994 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GgiKPXKC3x 00:23:57.256 [2024-05-15 10:17:42.888150] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:57.256 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:57.256 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2883751 00:23:57.256 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:57.256 10:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2883751 /var/tmp/bdevperf.sock 00:23:57.256 10:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2883751 ']' 00:23:57.256 10:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:23:57.256 10:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:57.257 10:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.257 10:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:57.257 10:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.257 [2024-05-15 10:17:42.932768] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:57.257 [2024-05-15 10:17:42.932818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883751 ] 00:23:57.257 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.257 [2024-05-15 10:17:42.982844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.257 [2024-05-15 10:17:43.010869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.517 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:57.517 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:57.517 10:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GgiKPXKC3x 00:23:57.517 [2024-05-15 10:17:43.225375] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.517 [2024-05-15 10:17:43.225441] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:57.778 TLSTESTn1 00:23:57.778 10:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:58.041 10:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:58.041 "subsystems": [ 00:23:58.041 { 00:23:58.041 "subsystem": "keyring", 00:23:58.041 "config": [] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "iobuf", 00:23:58.041 "config": [ 00:23:58.041 { 00:23:58.041 "method": "iobuf_set_options", 00:23:58.041 "params": { 00:23:58.041 "small_pool_count": 8192, 00:23:58.041 "large_pool_count": 1024, 00:23:58.041 "small_bufsize": 8192, 00:23:58.041 "large_bufsize": 135168 00:23:58.041 } 00:23:58.041 } 00:23:58.041 ] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "sock", 00:23:58.041 "config": [ 00:23:58.041 { 00:23:58.041 "method": "sock_impl_set_options", 00:23:58.041 "params": { 00:23:58.041 "impl_name": "posix", 00:23:58.041 "recv_buf_size": 2097152, 00:23:58.041 "send_buf_size": 2097152, 00:23:58.041 "enable_recv_pipe": true, 00:23:58.041 "enable_quickack": false, 00:23:58.041 "enable_placement_id": 0, 00:23:58.041 "enable_zerocopy_send_server": true, 00:23:58.041 "enable_zerocopy_send_client": false, 00:23:58.041 "zerocopy_threshold": 0, 00:23:58.041 "tls_version": 0, 00:23:58.041 "enable_ktls": false 00:23:58.041 } 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "method": "sock_impl_set_options", 00:23:58.041 "params": { 00:23:58.041 
"impl_name": "ssl", 00:23:58.041 "recv_buf_size": 4096, 00:23:58.041 "send_buf_size": 4096, 00:23:58.041 "enable_recv_pipe": true, 00:23:58.041 "enable_quickack": false, 00:23:58.041 "enable_placement_id": 0, 00:23:58.041 "enable_zerocopy_send_server": true, 00:23:58.041 "enable_zerocopy_send_client": false, 00:23:58.041 "zerocopy_threshold": 0, 00:23:58.041 "tls_version": 0, 00:23:58.041 "enable_ktls": false 00:23:58.041 } 00:23:58.041 } 00:23:58.041 ] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "vmd", 00:23:58.041 "config": [] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "accel", 00:23:58.041 "config": [ 00:23:58.041 { 00:23:58.041 "method": "accel_set_options", 00:23:58.041 "params": { 00:23:58.041 "small_cache_size": 128, 00:23:58.041 "large_cache_size": 16, 00:23:58.041 "task_count": 2048, 00:23:58.041 "sequence_count": 2048, 00:23:58.041 "buf_count": 2048 00:23:58.041 } 00:23:58.041 } 00:23:58.041 ] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "bdev", 00:23:58.041 "config": [ 00:23:58.041 { 00:23:58.041 "method": "bdev_set_options", 00:23:58.041 "params": { 00:23:58.041 "bdev_io_pool_size": 65535, 00:23:58.041 "bdev_io_cache_size": 256, 00:23:58.041 "bdev_auto_examine": true, 00:23:58.041 "iobuf_small_cache_size": 128, 00:23:58.041 "iobuf_large_cache_size": 16 00:23:58.041 } 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "method": "bdev_raid_set_options", 00:23:58.041 "params": { 00:23:58.041 "process_window_size_kb": 1024 00:23:58.041 } 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "method": "bdev_iscsi_set_options", 00:23:58.041 "params": { 00:23:58.041 "timeout_sec": 30 00:23:58.041 } 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "method": "bdev_nvme_set_options", 00:23:58.041 "params": { 00:23:58.041 "action_on_timeout": "none", 00:23:58.041 "timeout_us": 0, 00:23:58.041 "timeout_admin_us": 0, 00:23:58.041 "keep_alive_timeout_ms": 10000, 00:23:58.041 "arbitration_burst": 0, 00:23:58.041 "low_priority_weight": 0, 00:23:58.041 "medium_priority_weight": 0, 00:23:58.041 "high_priority_weight": 0, 00:23:58.041 "nvme_adminq_poll_period_us": 10000, 00:23:58.041 "nvme_ioq_poll_period_us": 0, 00:23:58.041 "io_queue_requests": 0, 00:23:58.041 "delay_cmd_submit": true, 00:23:58.041 "transport_retry_count": 4, 00:23:58.041 "bdev_retry_count": 3, 00:23:58.041 "transport_ack_timeout": 0, 00:23:58.041 "ctrlr_loss_timeout_sec": 0, 00:23:58.041 "reconnect_delay_sec": 0, 00:23:58.041 "fast_io_fail_timeout_sec": 0, 00:23:58.041 "disable_auto_failback": false, 00:23:58.041 "generate_uuids": false, 00:23:58.041 "transport_tos": 0, 00:23:58.041 "nvme_error_stat": false, 00:23:58.041 "rdma_srq_size": 0, 00:23:58.041 "io_path_stat": false, 00:23:58.041 "allow_accel_sequence": false, 00:23:58.041 "rdma_max_cq_size": 0, 00:23:58.041 "rdma_cm_event_timeout_ms": 0, 00:23:58.041 "dhchap_digests": [ 00:23:58.041 "sha256", 00:23:58.041 "sha384", 00:23:58.041 "sha512" 00:23:58.041 ], 00:23:58.041 "dhchap_dhgroups": [ 00:23:58.041 "null", 00:23:58.041 "ffdhe2048", 00:23:58.041 "ffdhe3072", 00:23:58.041 "ffdhe4096", 00:23:58.041 "ffdhe6144", 00:23:58.041 "ffdhe8192" 00:23:58.041 ] 00:23:58.041 } 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "method": "bdev_nvme_set_hotplug", 00:23:58.041 "params": { 00:23:58.041 "period_us": 100000, 00:23:58.041 "enable": false 00:23:58.041 } 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "method": "bdev_malloc_create", 00:23:58.041 "params": { 00:23:58.041 "name": "malloc0", 00:23:58.041 "num_blocks": 8192, 00:23:58.041 "block_size": 4096, 00:23:58.041 
"physical_block_size": 4096, 00:23:58.041 "uuid": "b4288556-74d1-4568-9cd4-b15e20a53c37", 00:23:58.041 "optimal_io_boundary": 0 00:23:58.041 } 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "method": "bdev_wait_for_examine" 00:23:58.041 } 00:23:58.041 ] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "nbd", 00:23:58.041 "config": [] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "scheduler", 00:23:58.041 "config": [ 00:23:58.041 { 00:23:58.041 "method": "framework_set_scheduler", 00:23:58.041 "params": { 00:23:58.041 "name": "static" 00:23:58.041 } 00:23:58.041 } 00:23:58.041 ] 00:23:58.041 }, 00:23:58.041 { 00:23:58.041 "subsystem": "nvmf", 00:23:58.041 "config": [ 00:23:58.041 { 00:23:58.041 "method": "nvmf_set_config", 00:23:58.041 "params": { 00:23:58.041 "discovery_filter": "match_any", 00:23:58.041 "admin_cmd_passthru": { 00:23:58.041 "identify_ctrlr": false 00:23:58.041 } 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "nvmf_set_max_subsystems", 00:23:58.042 "params": { 00:23:58.042 "max_subsystems": 1024 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "nvmf_set_crdt", 00:23:58.042 "params": { 00:23:58.042 "crdt1": 0, 00:23:58.042 "crdt2": 0, 00:23:58.042 "crdt3": 0 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "nvmf_create_transport", 00:23:58.042 "params": { 00:23:58.042 "trtype": "TCP", 00:23:58.042 "max_queue_depth": 128, 00:23:58.042 "max_io_qpairs_per_ctrlr": 127, 00:23:58.042 "in_capsule_data_size": 4096, 00:23:58.042 "max_io_size": 131072, 00:23:58.042 "io_unit_size": 131072, 00:23:58.042 "max_aq_depth": 128, 00:23:58.042 "num_shared_buffers": 511, 00:23:58.042 "buf_cache_size": 4294967295, 00:23:58.042 "dif_insert_or_strip": false, 00:23:58.042 "zcopy": false, 00:23:58.042 "c2h_success": false, 00:23:58.042 "sock_priority": 0, 00:23:58.042 "abort_timeout_sec": 1, 00:23:58.042 "ack_timeout": 0, 00:23:58.042 "data_wr_pool_size": 0 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "nvmf_create_subsystem", 00:23:58.042 "params": { 00:23:58.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.042 "allow_any_host": false, 00:23:58.042 "serial_number": "SPDK00000000000001", 00:23:58.042 "model_number": "SPDK bdev Controller", 00:23:58.042 "max_namespaces": 10, 00:23:58.042 "min_cntlid": 1, 00:23:58.042 "max_cntlid": 65519, 00:23:58.042 "ana_reporting": false 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "nvmf_subsystem_add_host", 00:23:58.042 "params": { 00:23:58.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.042 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.042 "psk": "/tmp/tmp.GgiKPXKC3x" 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "nvmf_subsystem_add_ns", 00:23:58.042 "params": { 00:23:58.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.042 "namespace": { 00:23:58.042 "nsid": 1, 00:23:58.042 "bdev_name": "malloc0", 00:23:58.042 "nguid": "B428855674D145689CD4B15E20A53C37", 00:23:58.042 "uuid": "b4288556-74d1-4568-9cd4-b15e20a53c37", 00:23:58.042 "no_auto_visible": false 00:23:58.042 } 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "nvmf_subsystem_add_listener", 00:23:58.042 "params": { 00:23:58.042 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.042 "listen_address": { 00:23:58.042 "trtype": "TCP", 00:23:58.042 "adrfam": "IPv4", 00:23:58.042 "traddr": "10.0.0.2", 00:23:58.042 "trsvcid": "4420" 00:23:58.042 }, 00:23:58.042 "secure_channel": true 00:23:58.042 } 00:23:58.042 } 00:23:58.042 ] 00:23:58.042 } 
00:23:58.042 ] 00:23:58.042 }' 00:23:58.042 10:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:58.042 10:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:58.042 "subsystems": [ 00:23:58.042 { 00:23:58.042 "subsystem": "keyring", 00:23:58.042 "config": [] 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "subsystem": "iobuf", 00:23:58.042 "config": [ 00:23:58.042 { 00:23:58.042 "method": "iobuf_set_options", 00:23:58.042 "params": { 00:23:58.042 "small_pool_count": 8192, 00:23:58.042 "large_pool_count": 1024, 00:23:58.042 "small_bufsize": 8192, 00:23:58.042 "large_bufsize": 135168 00:23:58.042 } 00:23:58.042 } 00:23:58.042 ] 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "subsystem": "sock", 00:23:58.042 "config": [ 00:23:58.042 { 00:23:58.042 "method": "sock_impl_set_options", 00:23:58.042 "params": { 00:23:58.042 "impl_name": "posix", 00:23:58.042 "recv_buf_size": 2097152, 00:23:58.042 "send_buf_size": 2097152, 00:23:58.042 "enable_recv_pipe": true, 00:23:58.042 "enable_quickack": false, 00:23:58.042 "enable_placement_id": 0, 00:23:58.042 "enable_zerocopy_send_server": true, 00:23:58.042 "enable_zerocopy_send_client": false, 00:23:58.042 "zerocopy_threshold": 0, 00:23:58.042 "tls_version": 0, 00:23:58.042 "enable_ktls": false 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "sock_impl_set_options", 00:23:58.042 "params": { 00:23:58.042 "impl_name": "ssl", 00:23:58.042 "recv_buf_size": 4096, 00:23:58.042 "send_buf_size": 4096, 00:23:58.042 "enable_recv_pipe": true, 00:23:58.042 "enable_quickack": false, 00:23:58.042 "enable_placement_id": 0, 00:23:58.042 "enable_zerocopy_send_server": true, 00:23:58.042 "enable_zerocopy_send_client": false, 00:23:58.042 "zerocopy_threshold": 0, 00:23:58.042 "tls_version": 0, 00:23:58.042 "enable_ktls": false 00:23:58.042 } 00:23:58.042 } 00:23:58.042 ] 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "subsystem": "vmd", 00:23:58.042 "config": [] 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "subsystem": "accel", 00:23:58.042 "config": [ 00:23:58.042 { 00:23:58.042 "method": "accel_set_options", 00:23:58.042 "params": { 00:23:58.042 "small_cache_size": 128, 00:23:58.042 "large_cache_size": 16, 00:23:58.042 "task_count": 2048, 00:23:58.042 "sequence_count": 2048, 00:23:58.042 "buf_count": 2048 00:23:58.042 } 00:23:58.042 } 00:23:58.042 ] 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "subsystem": "bdev", 00:23:58.042 "config": [ 00:23:58.042 { 00:23:58.042 "method": "bdev_set_options", 00:23:58.042 "params": { 00:23:58.042 "bdev_io_pool_size": 65535, 00:23:58.042 "bdev_io_cache_size": 256, 00:23:58.042 "bdev_auto_examine": true, 00:23:58.042 "iobuf_small_cache_size": 128, 00:23:58.042 "iobuf_large_cache_size": 16 00:23:58.042 } 00:23:58.042 }, 00:23:58.042 { 00:23:58.042 "method": "bdev_raid_set_options", 00:23:58.042 "params": { 00:23:58.042 "process_window_size_kb": 1024 00:23:58.042 } 00:23:58.043 }, 00:23:58.043 { 00:23:58.043 "method": "bdev_iscsi_set_options", 00:23:58.043 "params": { 00:23:58.043 "timeout_sec": 30 00:23:58.043 } 00:23:58.043 }, 00:23:58.043 { 00:23:58.043 "method": "bdev_nvme_set_options", 00:23:58.043 "params": { 00:23:58.043 "action_on_timeout": "none", 00:23:58.043 "timeout_us": 0, 00:23:58.043 "timeout_admin_us": 0, 00:23:58.043 "keep_alive_timeout_ms": 10000, 00:23:58.043 "arbitration_burst": 0, 00:23:58.043 "low_priority_weight": 0, 00:23:58.043 "medium_priority_weight": 0, 00:23:58.043 
"high_priority_weight": 0, 00:23:58.043 "nvme_adminq_poll_period_us": 10000, 00:23:58.043 "nvme_ioq_poll_period_us": 0, 00:23:58.043 "io_queue_requests": 512, 00:23:58.043 "delay_cmd_submit": true, 00:23:58.043 "transport_retry_count": 4, 00:23:58.043 "bdev_retry_count": 3, 00:23:58.043 "transport_ack_timeout": 0, 00:23:58.043 "ctrlr_loss_timeout_sec": 0, 00:23:58.043 "reconnect_delay_sec": 0, 00:23:58.043 "fast_io_fail_timeout_sec": 0, 00:23:58.043 "disable_auto_failback": false, 00:23:58.043 "generate_uuids": false, 00:23:58.043 "transport_tos": 0, 00:23:58.043 "nvme_error_stat": false, 00:23:58.043 "rdma_srq_size": 0, 00:23:58.043 "io_path_stat": false, 00:23:58.043 "allow_accel_sequence": false, 00:23:58.043 "rdma_max_cq_size": 0, 00:23:58.043 "rdma_cm_event_timeout_ms": 0, 00:23:58.043 "dhchap_digests": [ 00:23:58.043 "sha256", 00:23:58.043 "sha384", 00:23:58.043 "sha512" 00:23:58.043 ], 00:23:58.043 "dhchap_dhgroups": [ 00:23:58.043 "null", 00:23:58.043 "ffdhe2048", 00:23:58.043 "ffdhe3072", 00:23:58.043 "ffdhe4096", 00:23:58.043 "ffdhe6144", 00:23:58.043 "ffdhe8192" 00:23:58.043 ] 00:23:58.043 } 00:23:58.043 }, 00:23:58.043 { 00:23:58.043 "method": "bdev_nvme_attach_controller", 00:23:58.043 "params": { 00:23:58.043 "name": "TLSTEST", 00:23:58.043 "trtype": "TCP", 00:23:58.043 "adrfam": "IPv4", 00:23:58.043 "traddr": "10.0.0.2", 00:23:58.043 "trsvcid": "4420", 00:23:58.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.043 "prchk_reftag": false, 00:23:58.043 "prchk_guard": false, 00:23:58.043 "ctrlr_loss_timeout_sec": 0, 00:23:58.043 "reconnect_delay_sec": 0, 00:23:58.043 "fast_io_fail_timeout_sec": 0, 00:23:58.043 "psk": "/tmp/tmp.GgiKPXKC3x", 00:23:58.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.043 "hdgst": false, 00:23:58.043 "ddgst": false 00:23:58.043 } 00:23:58.043 }, 00:23:58.043 { 00:23:58.043 "method": "bdev_nvme_set_hotplug", 00:23:58.043 "params": { 00:23:58.043 "period_us": 100000, 00:23:58.043 "enable": false 00:23:58.043 } 00:23:58.043 }, 00:23:58.043 { 00:23:58.043 "method": "bdev_wait_for_examine" 00:23:58.043 } 00:23:58.043 ] 00:23:58.043 }, 00:23:58.043 { 00:23:58.043 "subsystem": "nbd", 00:23:58.043 "config": [] 00:23:58.043 } 00:23:58.043 ] 00:23:58.043 }' 00:23:58.043 10:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2883751 00:23:58.043 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2883751 ']' 00:23:58.043 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2883751 00:23:58.043 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:58.043 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:58.043 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2883751 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2883751' 00:23:58.305 killing process with pid 2883751 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2883751 00:23:58.305 Received shutdown signal, test time was about 10.000000 seconds 00:23:58.305 00:23:58.305 Latency(us) 00:23:58.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.305 
=================================================================================================================== 00:23:58.305 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:58.305 [2024-05-15 10:17:43.878318] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2883751 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2883403 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2883403 ']' 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2883403 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:58.305 10:17:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2883403 00:23:58.305 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:58.305 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:58.306 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2883403' 00:23:58.306 killing process with pid 2883403 00:23:58.306 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2883403 00:23:58.306 [2024-05-15 10:17:44.035169] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:58.306 [2024-05-15 10:17:44.035206] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:58.306 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2883403 00:23:58.568 10:17:44 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:58.568 10:17:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.568 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:58.568 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.568 10:17:44 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:58.568 "subsystems": [ 00:23:58.568 { 00:23:58.568 "subsystem": "keyring", 00:23:58.568 "config": [] 00:23:58.568 }, 00:23:58.568 { 00:23:58.568 "subsystem": "iobuf", 00:23:58.568 "config": [ 00:23:58.568 { 00:23:58.569 "method": "iobuf_set_options", 00:23:58.569 "params": { 00:23:58.569 "small_pool_count": 8192, 00:23:58.569 "large_pool_count": 1024, 00:23:58.569 "small_bufsize": 8192, 00:23:58.569 "large_bufsize": 135168 00:23:58.569 } 00:23:58.569 } 00:23:58.569 ] 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "subsystem": "sock", 00:23:58.569 "config": [ 00:23:58.569 { 00:23:58.569 "method": "sock_impl_set_options", 00:23:58.569 "params": { 00:23:58.569 "impl_name": "posix", 00:23:58.569 "recv_buf_size": 2097152, 00:23:58.569 "send_buf_size": 2097152, 00:23:58.569 "enable_recv_pipe": true, 00:23:58.569 "enable_quickack": false, 00:23:58.569 "enable_placement_id": 0, 00:23:58.569 "enable_zerocopy_send_server": true, 00:23:58.569 "enable_zerocopy_send_client": false, 00:23:58.569 "zerocopy_threshold": 0, 00:23:58.569 "tls_version": 0, 00:23:58.569 "enable_ktls": false 00:23:58.569 } 
00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "method": "sock_impl_set_options", 00:23:58.569 "params": { 00:23:58.569 "impl_name": "ssl", 00:23:58.569 "recv_buf_size": 4096, 00:23:58.569 "send_buf_size": 4096, 00:23:58.569 "enable_recv_pipe": true, 00:23:58.569 "enable_quickack": false, 00:23:58.569 "enable_placement_id": 0, 00:23:58.569 "enable_zerocopy_send_server": true, 00:23:58.569 "enable_zerocopy_send_client": false, 00:23:58.569 "zerocopy_threshold": 0, 00:23:58.569 "tls_version": 0, 00:23:58.569 "enable_ktls": false 00:23:58.569 } 00:23:58.569 } 00:23:58.569 ] 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "subsystem": "vmd", 00:23:58.569 "config": [] 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "subsystem": "accel", 00:23:58.569 "config": [ 00:23:58.569 { 00:23:58.569 "method": "accel_set_options", 00:23:58.569 "params": { 00:23:58.569 "small_cache_size": 128, 00:23:58.569 "large_cache_size": 16, 00:23:58.569 "task_count": 2048, 00:23:58.569 "sequence_count": 2048, 00:23:58.569 "buf_count": 2048 00:23:58.569 } 00:23:58.569 } 00:23:58.569 ] 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "subsystem": "bdev", 00:23:58.569 "config": [ 00:23:58.569 { 00:23:58.569 "method": "bdev_set_options", 00:23:58.569 "params": { 00:23:58.569 "bdev_io_pool_size": 65535, 00:23:58.569 "bdev_io_cache_size": 256, 00:23:58.569 "bdev_auto_examine": true, 00:23:58.569 "iobuf_small_cache_size": 128, 00:23:58.569 "iobuf_large_cache_size": 16 00:23:58.569 } 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "method": "bdev_raid_set_options", 00:23:58.569 "params": { 00:23:58.569 "process_window_size_kb": 1024 00:23:58.569 } 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "method": "bdev_iscsi_set_options", 00:23:58.569 "params": { 00:23:58.569 "timeout_sec": 30 00:23:58.569 } 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "method": "bdev_nvme_set_options", 00:23:58.569 "params": { 00:23:58.569 "action_on_timeout": "none", 00:23:58.569 "timeout_us": 0, 00:23:58.569 "timeout_admin_us": 0, 00:23:58.569 "keep_alive_timeout_ms": 10000, 00:23:58.569 "arbitration_burst": 0, 00:23:58.569 "low_priority_weight": 0, 00:23:58.569 "medium_priority_weight": 0, 00:23:58.569 "high_priority_weight": 0, 00:23:58.569 "nvme_adminq_poll_period_us": 10000, 00:23:58.569 "nvme_ioq_poll_period_us": 0, 00:23:58.569 "io_queue_requests": 0, 00:23:58.569 "delay_cmd_submit": true, 00:23:58.569 "transport_retry_count": 4, 00:23:58.569 "bdev_retry_count": 3, 00:23:58.569 "transport_ack_timeout": 0, 00:23:58.569 "ctrlr_loss_timeout_sec": 0, 00:23:58.569 "reconnect_delay_sec": 0, 00:23:58.569 "fast_io_fail_timeout_sec": 0, 00:23:58.569 "disable_auto_failback": false, 00:23:58.569 "generate_uuids": false, 00:23:58.569 "transport_tos": 0, 00:23:58.569 "nvme_error_stat": false, 00:23:58.569 "rdma_srq_size": 0, 00:23:58.569 "io_path_stat": false, 00:23:58.569 "allow_accel_sequence": false, 00:23:58.569 "rdma_max_cq_size": 0, 00:23:58.569 "rdma_cm_event_timeout_ms": 0, 00:23:58.569 "dhchap_digests": [ 00:23:58.569 "sha256", 00:23:58.569 "sha384", 00:23:58.569 "sha512" 00:23:58.569 ], 00:23:58.569 "dhchap_dhgroups": [ 00:23:58.569 "null", 00:23:58.569 "ffdhe2048", 00:23:58.569 "ffdhe3072", 00:23:58.569 "ffdhe4096", 00:23:58.569 "ffdhe6144", 00:23:58.569 "ffdhe8192" 00:23:58.569 ] 00:23:58.569 } 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "method": "bdev_nvme_set_hotplug", 00:23:58.569 "params": { 00:23:58.569 "period_us": 100000, 00:23:58.569 "enable": false 00:23:58.569 } 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "method": "bdev_malloc_create", 00:23:58.569 
"params": { 00:23:58.569 "name": "malloc0", 00:23:58.569 "num_blocks": 8192, 00:23:58.569 "block_size": 4096, 00:23:58.569 "physical_block_size": 4096, 00:23:58.569 "uuid": "b4288556-74d1-4568-9cd4-b15e20a53c37", 00:23:58.569 "optimal_io_boundary": 0 00:23:58.569 } 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "method": "bdev_wait_for_examine" 00:23:58.569 } 00:23:58.569 ] 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "subsystem": "nbd", 00:23:58.569 "config": [] 00:23:58.569 }, 00:23:58.569 { 00:23:58.569 "subsystem": "scheduler", 00:23:58.569 "config": [ 00:23:58.570 { 00:23:58.570 "method": "framework_set_scheduler", 00:23:58.570 "params": { 00:23:58.570 "name": "static" 00:23:58.570 } 00:23:58.570 } 00:23:58.570 ] 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "subsystem": "nvmf", 00:23:58.570 "config": [ 00:23:58.570 { 00:23:58.570 "method": "nvmf_set_config", 00:23:58.570 "params": { 00:23:58.570 "discovery_filter": "match_any", 00:23:58.570 "admin_cmd_passthru": { 00:23:58.570 "identify_ctrlr": false 00:23:58.570 } 00:23:58.570 } 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "method": "nvmf_set_max_subsystems", 00:23:58.570 "params": { 00:23:58.570 "max_subsystems": 1024 00:23:58.570 } 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "method": "nvmf_set_crdt", 00:23:58.570 "params": { 00:23:58.570 "crdt1": 0, 00:23:58.570 "crdt2": 0, 00:23:58.570 "crdt3": 0 00:23:58.570 } 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "method": "nvmf_create_transport", 00:23:58.570 "params": { 00:23:58.570 "trtype": "TCP", 00:23:58.570 "max_queue_depth": 128, 00:23:58.570 "max_io_qpairs_per_ctrlr": 127, 00:23:58.570 "in_capsule_data_size": 4096, 00:23:58.570 "max_io_size": 131072, 00:23:58.570 "io_unit_size": 131072, 00:23:58.570 "max_aq_depth": 128, 00:23:58.570 "num_shared_buffers": 511, 00:23:58.570 "buf_cache_size": 4294967295, 00:23:58.570 "dif_insert_or_strip": false, 00:23:58.570 "zcopy": false, 00:23:58.570 "c2h_success": false, 00:23:58.570 "sock_priority": 0, 00:23:58.570 "abort_timeout_sec": 1, 00:23:58.570 "ack_timeout": 0, 00:23:58.570 "data_wr_pool_size": 0 00:23:58.570 } 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "method": "nvmf_create_subsystem", 00:23:58.570 "params": { 00:23:58.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.570 "allow_any_host": false, 00:23:58.570 "serial_number": "SPDK00000000000001", 00:23:58.570 "model_number": "SPDK bdev Controller", 00:23:58.570 "max_namespaces": 10, 00:23:58.570 "min_cntlid": 1, 00:23:58.570 "max_cntlid": 65519, 00:23:58.570 "ana_reporting": false 00:23:58.570 } 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "method": "nvmf_subsystem_add_host", 00:23:58.570 "params": { 00:23:58.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.570 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.570 "psk": "/tmp/tmp.GgiKPXKC3x" 00:23:58.570 } 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "method": "nvmf_subsystem_add_ns", 00:23:58.570 "params": { 00:23:58.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.570 "namespace": { 00:23:58.570 "nsid": 1, 00:23:58.570 "bdev_name": "malloc0", 00:23:58.570 "nguid": "B428855674D145689CD4B15E20A53C37", 00:23:58.570 "uuid": "b4288556-74d1-4568-9cd4-b15e20a53c37", 00:23:58.570 "no_auto_visible": false 00:23:58.570 } 00:23:58.570 } 00:23:58.570 }, 00:23:58.570 { 00:23:58.570 "method": "nvmf_subsystem_add_listener", 00:23:58.570 "params": { 00:23:58.570 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.570 "listen_address": { 00:23:58.570 "trtype": "TCP", 00:23:58.570 "adrfam": "IPv4", 00:23:58.570 "traddr": "10.0.0.2", 00:23:58.570 "trsvcid": 
"4420" 00:23:58.570 }, 00:23:58.570 "secure_channel": true 00:23:58.570 } 00:23:58.570 } 00:23:58.570 ] 00:23:58.570 } 00:23:58.570 ] 00:23:58.570 }' 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2883965 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2883965 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2883965 ']' 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:58.570 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.570 [2024-05-15 10:17:44.208383] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:23:58.570 [2024-05-15 10:17:44.208435] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.570 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.570 [2024-05-15 10:17:44.288024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.570 [2024-05-15 10:17:44.313986] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.570 [2024-05-15 10:17:44.314020] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.570 [2024-05-15 10:17:44.314026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.570 [2024-05-15 10:17:44.314031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.570 [2024-05-15 10:17:44.314035] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:58.570 [2024-05-15 10:17:44.314084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.833 [2024-05-15 10:17:44.483948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.833 [2024-05-15 10:17:44.499914] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:58.833 [2024-05-15 10:17:44.515946] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:58.833 [2024-05-15 10:17:44.515981] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:58.833 [2024-05-15 10:17:44.530442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.407 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:59.407 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:59.407 10:17:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.407 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:59.407 10:17:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2884311 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2884311 /var/tmp/bdevperf.sock 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2884311 ']' 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:59.407 10:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:59.407 "subsystems": [ 00:23:59.407 { 00:23:59.407 "subsystem": "keyring", 00:23:59.407 "config": [] 00:23:59.407 }, 00:23:59.407 { 00:23:59.407 "subsystem": "iobuf", 00:23:59.407 "config": [ 00:23:59.407 { 00:23:59.407 "method": "iobuf_set_options", 00:23:59.407 "params": { 00:23:59.407 "small_pool_count": 8192, 00:23:59.407 "large_pool_count": 1024, 00:23:59.407 "small_bufsize": 8192, 00:23:59.407 "large_bufsize": 135168 00:23:59.407 } 00:23:59.407 } 00:23:59.407 ] 00:23:59.407 }, 00:23:59.407 { 00:23:59.407 "subsystem": "sock", 00:23:59.407 "config": [ 00:23:59.407 { 00:23:59.407 "method": "sock_impl_set_options", 00:23:59.407 "params": { 00:23:59.408 "impl_name": "posix", 00:23:59.408 "recv_buf_size": 2097152, 00:23:59.408 "send_buf_size": 2097152, 00:23:59.408 "enable_recv_pipe": true, 00:23:59.408 "enable_quickack": false, 00:23:59.408 "enable_placement_id": 0, 00:23:59.408 "enable_zerocopy_send_server": true, 00:23:59.408 "enable_zerocopy_send_client": false, 00:23:59.408 "zerocopy_threshold": 0, 00:23:59.408 "tls_version": 0, 00:23:59.408 "enable_ktls": false 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "sock_impl_set_options", 00:23:59.408 "params": { 00:23:59.408 "impl_name": "ssl", 00:23:59.408 "recv_buf_size": 4096, 00:23:59.408 "send_buf_size": 4096, 00:23:59.408 "enable_recv_pipe": true, 00:23:59.408 "enable_quickack": false, 00:23:59.408 "enable_placement_id": 0, 00:23:59.408 "enable_zerocopy_send_server": true, 00:23:59.408 "enable_zerocopy_send_client": false, 00:23:59.408 "zerocopy_threshold": 0, 00:23:59.408 "tls_version": 0, 00:23:59.408 "enable_ktls": false 00:23:59.408 } 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "vmd", 00:23:59.408 "config": [] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "accel", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.408 "method": "accel_set_options", 00:23:59.408 "params": { 00:23:59.408 "small_cache_size": 128, 00:23:59.408 "large_cache_size": 16, 00:23:59.408 "task_count": 2048, 00:23:59.408 "sequence_count": 2048, 00:23:59.408 "buf_count": 2048 00:23:59.408 } 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "bdev", 00:23:59.408 "config": [ 00:23:59.408 { 00:23:59.408 "method": "bdev_set_options", 00:23:59.408 "params": { 00:23:59.408 "bdev_io_pool_size": 65535, 00:23:59.408 "bdev_io_cache_size": 256, 00:23:59.408 "bdev_auto_examine": true, 00:23:59.408 "iobuf_small_cache_size": 128, 00:23:59.408 "iobuf_large_cache_size": 16 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "bdev_raid_set_options", 00:23:59.408 "params": { 00:23:59.408 "process_window_size_kb": 1024 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "bdev_iscsi_set_options", 00:23:59.408 "params": { 00:23:59.408 "timeout_sec": 30 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "bdev_nvme_set_options", 00:23:59.408 "params": { 00:23:59.408 "action_on_timeout": "none", 00:23:59.408 "timeout_us": 0, 00:23:59.408 
"timeout_admin_us": 0, 00:23:59.408 "keep_alive_timeout_ms": 10000, 00:23:59.408 "arbitration_burst": 0, 00:23:59.408 "low_priority_weight": 0, 00:23:59.408 "medium_priority_weight": 0, 00:23:59.408 "high_priority_weight": 0, 00:23:59.408 "nvme_adminq_poll_period_us": 10000, 00:23:59.408 "nvme_ioq_poll_period_us": 0, 00:23:59.408 "io_queue_requests": 512, 00:23:59.408 "delay_cmd_submit": true, 00:23:59.408 "transport_retry_count": 4, 00:23:59.408 "bdev_retry_count": 3, 00:23:59.408 "transport_ack_timeout": 0, 00:23:59.408 "ctrlr_loss_timeout_sec": 0, 00:23:59.408 "reconnect_delay_sec": 0, 00:23:59.408 "fast_io_fail_timeout_sec": 0, 00:23:59.408 "disable_auto_failback": false, 00:23:59.408 "generate_uuids": false, 00:23:59.408 "transport_tos": 0, 00:23:59.408 "nvme_error_stat": false, 00:23:59.408 "rdma_srq_size": 0, 00:23:59.408 "io_path_stat": false, 00:23:59.408 "allow_accel_sequence": false, 00:23:59.408 "rdma_max_cq_size": 0, 00:23:59.408 "rdma_cm_event_timeout_ms": 0, 00:23:59.408 "dhchap_digests": [ 00:23:59.408 "sha256", 00:23:59.408 "sha384", 00:23:59.408 "sha512" 00:23:59.408 ], 00:23:59.408 "dhchap_dhgroups": [ 00:23:59.408 "null", 00:23:59.408 "ffdhe2048", 00:23:59.408 "ffdhe3072", 00:23:59.408 "ffdhe4096", 00:23:59.408 "ffdhe6144", 00:23:59.408 "ffdhe8192" 00:23:59.408 ] 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "bdev_nvme_attach_controller", 00:23:59.408 "params": { 00:23:59.408 "name": "TLSTEST", 00:23:59.408 "trtype": "TCP", 00:23:59.408 "adrfam": "IPv4", 00:23:59.408 "traddr": "10.0.0.2", 00:23:59.408 "trsvcid": "4420", 00:23:59.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.408 "prchk_reftag": false, 00:23:59.408 "prchk_guard": false, 00:23:59.408 "ctrlr_loss_timeout_sec": 0, 00:23:59.408 "reconnect_delay_sec": 0, 00:23:59.408 "fast_io_fail_timeout_sec": 0, 00:23:59.408 "psk": "/tmp/tmp.GgiKPXKC3x", 00:23:59.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.408 "hdgst": false, 00:23:59.408 "ddgst": false 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "bdev_nvme_set_hotplug", 00:23:59.408 "params": { 00:23:59.408 "period_us": 100000, 00:23:59.408 "enable": false 00:23:59.408 } 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "method": "bdev_wait_for_examine" 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }, 00:23:59.408 { 00:23:59.408 "subsystem": "nbd", 00:23:59.408 "config": [] 00:23:59.408 } 00:23:59.408 ] 00:23:59.408 }' 00:23:59.408 [2024-05-15 10:17:45.052931] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:23:59.408 [2024-05-15 10:17:45.052986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884311 ] 00:23:59.408 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.408 [2024-05-15 10:17:45.102117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.408 [2024-05-15 10:17:45.130017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.670 [2024-05-15 10:17:45.241308] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.670 [2024-05-15 10:17:45.241369] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:00.244 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:00.244 10:17:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:24:00.244 10:17:45 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:00.244 Running I/O for 10 seconds... 00:24:10.261 00:24:10.261 Latency(us) 00:24:10.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.261 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.261 Verification LBA range: start 0x0 length 0x2000 00:24:10.261 TLSTESTn1 : 10.09 1357.75 5.30 0.00 0.00 93910.08 6171.31 185248.43 00:24:10.261 =================================================================================================================== 00:24:10.261 Total : 1357.75 5.30 0.00 0.00 93910.08 6171.31 185248.43 00:24:10.261 0 00:24:10.261 10:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.261 10:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2884311 00:24:10.261 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2884311 ']' 00:24:10.261 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2884311 00:24:10.261 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:10.261 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:10.261 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2884311 00:24:10.523 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:24:10.523 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:24:10.523 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2884311' 00:24:10.523 killing process with pid 2884311 00:24:10.523 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2884311 00:24:10.523 Received shutdown signal, test time was about 10.000000 seconds 00:24:10.523 00:24:10.523 Latency(us) 00:24:10.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.524 =================================================================================================================== 00:24:10.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.524 [2024-05-15 10:17:56.092433] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2884311 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2883965 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2883965 ']' 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2883965 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2883965 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2883965' 00:24:10.524 killing process with pid 2883965 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2883965 00:24:10.524 [2024-05-15 10:17:56.252243] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:10.524 [2024-05-15 10:17:56.252282] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:10.524 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2883965 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2886334 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2886334 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2886334 ']' 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:10.786 10:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.786 [2024-05-15 10:17:56.421839] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:24:10.786 [2024-05-15 10:17:56.421889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.786 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.786 [2024-05-15 10:17:56.486494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.786 [2024-05-15 10:17:56.515585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.786 [2024-05-15 10:17:56.515623] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.786 [2024-05-15 10:17:56.515630] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.786 [2024-05-15 10:17:56.515637] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.786 [2024-05-15 10:17:56.515643] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.786 [2024-05-15 10:17:56.515660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.GgiKPXKC3x 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GgiKPXKC3x 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:11.729 [2024-05-15 10:17:57.380160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.729 10:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:11.989 10:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:11.989 [2024-05-15 10:17:57.716982] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:11.989 [2024-05-15 10:17:57.717028] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.989 [2024-05-15 10:17:57.717216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.989 10:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:12.248 malloc0 00:24:12.248 10:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GgiKPXKC3x 00:24:12.508 [2024-05-15 10:17:58.204902] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2886754 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2886754 /var/tmp/bdevperf.sock 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2886754 ']' 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:12.508 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.508 [2024-05-15 10:17:58.269622] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:24:12.508 [2024-05-15 10:17:58.269671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886754 ] 00:24:12.508 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.768 [2024-05-15 10:17:58.343215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.768 [2024-05-15 10:17:58.371640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.768 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:12.768 10:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:24:12.768 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GgiKPXKC3x 00:24:13.032 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:13.032 [2024-05-15 10:17:58.711127] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.032 nvme0n1 00:24:13.032 10:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.337 Running I/O for 1 seconds... 
00:24:14.283 00:24:14.283 Latency(us) 00:24:14.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.283 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:14.283 Verification LBA range: start 0x0 length 0x2000 00:24:14.283 nvme0n1 : 1.09 999.41 3.90 0.00 0.00 123784.14 6144.00 180879.36 00:24:14.283 =================================================================================================================== 00:24:14.283 Total : 999.41 3.90 0.00 0.00 123784.14 6144.00 180879.36 00:24:14.283 0 00:24:14.283 10:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2886754 00:24:14.283 10:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2886754 ']' 00:24:14.283 10:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2886754 00:24:14.283 10:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:14.283 10:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:14.283 10:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2886754 00:24:14.283 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:14.283 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:14.283 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2886754' 00:24:14.283 killing process with pid 2886754 00:24:14.283 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2886754 00:24:14.283 Received shutdown signal, test time was about 1.000000 seconds 00:24:14.283 00:24:14.283 Latency(us) 00:24:14.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.284 =================================================================================================================== 00:24:14.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.284 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2886754 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2886334 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2886334 ']' 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2886334 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2886334 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2886334' 00:24:14.545 killing process with pid 2886334 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2886334 00:24:14.545 [2024-05-15 10:18:00.210790] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:14.545 [2024-05-15 10:18:00.210831] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 2886334 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:14.545 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2887303 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2887303 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2887303 ']' 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:14.807 10:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.807 [2024-05-15 10:18:00.393926] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:24:14.807 [2024-05-15 10:18:00.393983] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.807 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.807 [2024-05-15 10:18:00.457773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.807 [2024-05-15 10:18:00.488319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.807 [2024-05-15 10:18:00.488359] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.807 [2024-05-15 10:18:00.488367] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.807 [2024-05-15 10:18:00.488373] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.807 [2024-05-15 10:18:00.488379] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.807 [2024-05-15 10:18:00.488395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.381 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:15.381 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:24:15.381 10:18:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:15.381 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:15.381 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.644 [2024-05-15 10:18:01.200841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.644 malloc0 00:24:15.644 [2024-05-15 10:18:01.227524] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:15.644 [2024-05-15 10:18:01.227571] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:15.644 [2024-05-15 10:18:01.227752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2887472 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2887472 /var/tmp/bdevperf.sock 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2887472 ']' 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:15.644 10:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.644 [2024-05-15 10:18:01.303137] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:24:15.644 [2024-05-15 10:18:01.303183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887472 ] 00:24:15.644 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.644 [2024-05-15 10:18:01.379764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.644 [2024-05-15 10:18:01.408342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.591 10:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:16.591 10:18:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:24:16.591 10:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GgiKPXKC3x 00:24:16.591 10:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:16.591 [2024-05-15 10:18:02.365185] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.852 nvme0n1 00:24:16.852 10:18:02 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.852 Running I/O for 1 seconds... 00:24:18.242 00:24:18.242 Latency(us) 00:24:18.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.242 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:18.242 Verification LBA range: start 0x0 length 0x2000 00:24:18.242 nvme0n1 : 1.10 1040.58 4.06 0.00 0.00 118855.96 6253.23 173015.04 00:24:18.242 =================================================================================================================== 00:24:18.242 Total : 1040.58 4.06 0.00 0.00 118855.96 6253.23 173015.04 00:24:18.242 0 00:24:18.242 10:18:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:18.242 10:18:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:18.242 10:18:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.242 10:18:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:18.242 10:18:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:18.242 "subsystems": [ 00:24:18.242 { 00:24:18.242 "subsystem": "keyring", 00:24:18.242 "config": [ 00:24:18.242 { 00:24:18.242 "method": "keyring_file_add_key", 00:24:18.242 "params": { 00:24:18.242 "name": "key0", 00:24:18.242 "path": "/tmp/tmp.GgiKPXKC3x" 00:24:18.242 } 00:24:18.242 } 00:24:18.242 ] 00:24:18.242 }, 00:24:18.242 { 00:24:18.242 "subsystem": "iobuf", 00:24:18.242 "config": [ 00:24:18.242 { 00:24:18.242 "method": "iobuf_set_options", 00:24:18.242 "params": { 00:24:18.242 "small_pool_count": 8192, 00:24:18.242 "large_pool_count": 1024, 00:24:18.242 "small_bufsize": 8192, 00:24:18.242 "large_bufsize": 135168 00:24:18.242 } 00:24:18.242 } 00:24:18.242 ] 00:24:18.242 }, 00:24:18.242 { 00:24:18.242 "subsystem": "sock", 00:24:18.242 "config": [ 00:24:18.242 { 00:24:18.242 "method": "sock_impl_set_options", 00:24:18.242 "params": { 00:24:18.242 "impl_name": "posix", 00:24:18.242 "recv_buf_size": 2097152, 
00:24:18.242 "send_buf_size": 2097152, 00:24:18.242 "enable_recv_pipe": true, 00:24:18.242 "enable_quickack": false, 00:24:18.242 "enable_placement_id": 0, 00:24:18.242 "enable_zerocopy_send_server": true, 00:24:18.242 "enable_zerocopy_send_client": false, 00:24:18.242 "zerocopy_threshold": 0, 00:24:18.242 "tls_version": 0, 00:24:18.242 "enable_ktls": false 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "sock_impl_set_options", 00:24:18.243 "params": { 00:24:18.243 "impl_name": "ssl", 00:24:18.243 "recv_buf_size": 4096, 00:24:18.243 "send_buf_size": 4096, 00:24:18.243 "enable_recv_pipe": true, 00:24:18.243 "enable_quickack": false, 00:24:18.243 "enable_placement_id": 0, 00:24:18.243 "enable_zerocopy_send_server": true, 00:24:18.243 "enable_zerocopy_send_client": false, 00:24:18.243 "zerocopy_threshold": 0, 00:24:18.243 "tls_version": 0, 00:24:18.243 "enable_ktls": false 00:24:18.243 } 00:24:18.243 } 00:24:18.243 ] 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "subsystem": "vmd", 00:24:18.243 "config": [] 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "subsystem": "accel", 00:24:18.243 "config": [ 00:24:18.243 { 00:24:18.243 "method": "accel_set_options", 00:24:18.243 "params": { 00:24:18.243 "small_cache_size": 128, 00:24:18.243 "large_cache_size": 16, 00:24:18.243 "task_count": 2048, 00:24:18.243 "sequence_count": 2048, 00:24:18.243 "buf_count": 2048 00:24:18.243 } 00:24:18.243 } 00:24:18.243 ] 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "subsystem": "bdev", 00:24:18.243 "config": [ 00:24:18.243 { 00:24:18.243 "method": "bdev_set_options", 00:24:18.243 "params": { 00:24:18.243 "bdev_io_pool_size": 65535, 00:24:18.243 "bdev_io_cache_size": 256, 00:24:18.243 "bdev_auto_examine": true, 00:24:18.243 "iobuf_small_cache_size": 128, 00:24:18.243 "iobuf_large_cache_size": 16 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "bdev_raid_set_options", 00:24:18.243 "params": { 00:24:18.243 "process_window_size_kb": 1024 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "bdev_iscsi_set_options", 00:24:18.243 "params": { 00:24:18.243 "timeout_sec": 30 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "bdev_nvme_set_options", 00:24:18.243 "params": { 00:24:18.243 "action_on_timeout": "none", 00:24:18.243 "timeout_us": 0, 00:24:18.243 "timeout_admin_us": 0, 00:24:18.243 "keep_alive_timeout_ms": 10000, 00:24:18.243 "arbitration_burst": 0, 00:24:18.243 "low_priority_weight": 0, 00:24:18.243 "medium_priority_weight": 0, 00:24:18.243 "high_priority_weight": 0, 00:24:18.243 "nvme_adminq_poll_period_us": 10000, 00:24:18.243 "nvme_ioq_poll_period_us": 0, 00:24:18.243 "io_queue_requests": 0, 00:24:18.243 "delay_cmd_submit": true, 00:24:18.243 "transport_retry_count": 4, 00:24:18.243 "bdev_retry_count": 3, 00:24:18.243 "transport_ack_timeout": 0, 00:24:18.243 "ctrlr_loss_timeout_sec": 0, 00:24:18.243 "reconnect_delay_sec": 0, 00:24:18.243 "fast_io_fail_timeout_sec": 0, 00:24:18.243 "disable_auto_failback": false, 00:24:18.243 "generate_uuids": false, 00:24:18.243 "transport_tos": 0, 00:24:18.243 "nvme_error_stat": false, 00:24:18.243 "rdma_srq_size": 0, 00:24:18.243 "io_path_stat": false, 00:24:18.243 "allow_accel_sequence": false, 00:24:18.243 "rdma_max_cq_size": 0, 00:24:18.243 "rdma_cm_event_timeout_ms": 0, 00:24:18.243 "dhchap_digests": [ 00:24:18.243 "sha256", 00:24:18.243 "sha384", 00:24:18.243 "sha512" 00:24:18.243 ], 00:24:18.243 "dhchap_dhgroups": [ 00:24:18.243 "null", 00:24:18.243 "ffdhe2048", 00:24:18.243 "ffdhe3072", 
00:24:18.243 "ffdhe4096", 00:24:18.243 "ffdhe6144", 00:24:18.243 "ffdhe8192" 00:24:18.243 ] 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "bdev_nvme_set_hotplug", 00:24:18.243 "params": { 00:24:18.243 "period_us": 100000, 00:24:18.243 "enable": false 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "bdev_malloc_create", 00:24:18.243 "params": { 00:24:18.243 "name": "malloc0", 00:24:18.243 "num_blocks": 8192, 00:24:18.243 "block_size": 4096, 00:24:18.243 "physical_block_size": 4096, 00:24:18.243 "uuid": "a3078e82-e07f-4e7a-81f7-bee7254da1a5", 00:24:18.243 "optimal_io_boundary": 0 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "bdev_wait_for_examine" 00:24:18.243 } 00:24:18.243 ] 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "subsystem": "nbd", 00:24:18.243 "config": [] 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "subsystem": "scheduler", 00:24:18.243 "config": [ 00:24:18.243 { 00:24:18.243 "method": "framework_set_scheduler", 00:24:18.243 "params": { 00:24:18.243 "name": "static" 00:24:18.243 } 00:24:18.243 } 00:24:18.243 ] 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "subsystem": "nvmf", 00:24:18.243 "config": [ 00:24:18.243 { 00:24:18.243 "method": "nvmf_set_config", 00:24:18.243 "params": { 00:24:18.243 "discovery_filter": "match_any", 00:24:18.243 "admin_cmd_passthru": { 00:24:18.243 "identify_ctrlr": false 00:24:18.243 } 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "nvmf_set_max_subsystems", 00:24:18.243 "params": { 00:24:18.243 "max_subsystems": 1024 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "nvmf_set_crdt", 00:24:18.243 "params": { 00:24:18.243 "crdt1": 0, 00:24:18.243 "crdt2": 0, 00:24:18.243 "crdt3": 0 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "nvmf_create_transport", 00:24:18.243 "params": { 00:24:18.243 "trtype": "TCP", 00:24:18.243 "max_queue_depth": 128, 00:24:18.243 "max_io_qpairs_per_ctrlr": 127, 00:24:18.243 "in_capsule_data_size": 4096, 00:24:18.243 "max_io_size": 131072, 00:24:18.243 "io_unit_size": 131072, 00:24:18.243 "max_aq_depth": 128, 00:24:18.243 "num_shared_buffers": 511, 00:24:18.243 "buf_cache_size": 4294967295, 00:24:18.243 "dif_insert_or_strip": false, 00:24:18.243 "zcopy": false, 00:24:18.243 "c2h_success": false, 00:24:18.243 "sock_priority": 0, 00:24:18.243 "abort_timeout_sec": 1, 00:24:18.243 "ack_timeout": 0, 00:24:18.243 "data_wr_pool_size": 0 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "nvmf_create_subsystem", 00:24:18.243 "params": { 00:24:18.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.243 "allow_any_host": false, 00:24:18.243 "serial_number": "00000000000000000000", 00:24:18.243 "model_number": "SPDK bdev Controller", 00:24:18.243 "max_namespaces": 32, 00:24:18.243 "min_cntlid": 1, 00:24:18.243 "max_cntlid": 65519, 00:24:18.243 "ana_reporting": false 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "nvmf_subsystem_add_host", 00:24:18.243 "params": { 00:24:18.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.243 "host": "nqn.2016-06.io.spdk:host1", 00:24:18.243 "psk": "key0" 00:24:18.243 } 00:24:18.243 }, 00:24:18.243 { 00:24:18.243 "method": "nvmf_subsystem_add_ns", 00:24:18.243 "params": { 00:24:18.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.243 "namespace": { 00:24:18.243 "nsid": 1, 00:24:18.243 "bdev_name": "malloc0", 00:24:18.244 "nguid": "A3078E82E07F4E7A81F7BEE7254DA1A5", 00:24:18.244 "uuid": "a3078e82-e07f-4e7a-81f7-bee7254da1a5", 00:24:18.244 
"no_auto_visible": false 00:24:18.244 } 00:24:18.244 } 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "method": "nvmf_subsystem_add_listener", 00:24:18.244 "params": { 00:24:18.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.244 "listen_address": { 00:24:18.244 "trtype": "TCP", 00:24:18.244 "adrfam": "IPv4", 00:24:18.244 "traddr": "10.0.0.2", 00:24:18.244 "trsvcid": "4420" 00:24:18.244 }, 00:24:18.244 "secure_channel": true 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }' 00:24:18.244 10:18:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:18.244 10:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:18.244 "subsystems": [ 00:24:18.244 { 00:24:18.244 "subsystem": "keyring", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "keyring_file_add_key", 00:24:18.244 "params": { 00:24:18.244 "name": "key0", 00:24:18.244 "path": "/tmp/tmp.GgiKPXKC3x" 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "iobuf", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "iobuf_set_options", 00:24:18.244 "params": { 00:24:18.244 "small_pool_count": 8192, 00:24:18.244 "large_pool_count": 1024, 00:24:18.244 "small_bufsize": 8192, 00:24:18.244 "large_bufsize": 135168 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "sock", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "sock_impl_set_options", 00:24:18.244 "params": { 00:24:18.244 "impl_name": "posix", 00:24:18.244 "recv_buf_size": 2097152, 00:24:18.244 "send_buf_size": 2097152, 00:24:18.244 "enable_recv_pipe": true, 00:24:18.244 "enable_quickack": false, 00:24:18.244 "enable_placement_id": 0, 00:24:18.244 "enable_zerocopy_send_server": true, 00:24:18.244 "enable_zerocopy_send_client": false, 00:24:18.244 "zerocopy_threshold": 0, 00:24:18.244 "tls_version": 0, 00:24:18.244 "enable_ktls": false 00:24:18.244 } 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "method": "sock_impl_set_options", 00:24:18.244 "params": { 00:24:18.244 "impl_name": "ssl", 00:24:18.244 "recv_buf_size": 4096, 00:24:18.244 "send_buf_size": 4096, 00:24:18.244 "enable_recv_pipe": true, 00:24:18.244 "enable_quickack": false, 00:24:18.244 "enable_placement_id": 0, 00:24:18.244 "enable_zerocopy_send_server": true, 00:24:18.244 "enable_zerocopy_send_client": false, 00:24:18.244 "zerocopy_threshold": 0, 00:24:18.244 "tls_version": 0, 00:24:18.244 "enable_ktls": false 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "vmd", 00:24:18.244 "config": [] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "accel", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "accel_set_options", 00:24:18.244 "params": { 00:24:18.244 "small_cache_size": 128, 00:24:18.244 "large_cache_size": 16, 00:24:18.244 "task_count": 2048, 00:24:18.244 "sequence_count": 2048, 00:24:18.244 "buf_count": 2048 00:24:18.244 } 00:24:18.244 } 00:24:18.244 ] 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "subsystem": "bdev", 00:24:18.244 "config": [ 00:24:18.244 { 00:24:18.244 "method": "bdev_set_options", 00:24:18.244 "params": { 00:24:18.244 "bdev_io_pool_size": 65535, 00:24:18.244 "bdev_io_cache_size": 256, 00:24:18.244 "bdev_auto_examine": true, 00:24:18.244 "iobuf_small_cache_size": 128, 00:24:18.244 "iobuf_large_cache_size": 16 00:24:18.244 } 00:24:18.244 }, 
00:24:18.244 { 00:24:18.244 "method": "bdev_raid_set_options", 00:24:18.244 "params": { 00:24:18.244 "process_window_size_kb": 1024 00:24:18.244 } 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "method": "bdev_iscsi_set_options", 00:24:18.244 "params": { 00:24:18.244 "timeout_sec": 30 00:24:18.244 } 00:24:18.244 }, 00:24:18.244 { 00:24:18.244 "method": "bdev_nvme_set_options", 00:24:18.244 "params": { 00:24:18.244 "action_on_timeout": "none", 00:24:18.244 "timeout_us": 0, 00:24:18.244 "timeout_admin_us": 0, 00:24:18.244 "keep_alive_timeout_ms": 10000, 00:24:18.244 "arbitration_burst": 0, 00:24:18.244 "low_priority_weight": 0, 00:24:18.244 "medium_priority_weight": 0, 00:24:18.244 "high_priority_weight": 0, 00:24:18.244 "nvme_adminq_poll_period_us": 10000, 00:24:18.244 "nvme_ioq_poll_period_us": 0, 00:24:18.244 "io_queue_requests": 512, 00:24:18.244 "delay_cmd_submit": true, 00:24:18.244 "transport_retry_count": 4, 00:24:18.244 "bdev_retry_count": 3, 00:24:18.244 "transport_ack_timeout": 0, 00:24:18.244 "ctrlr_loss_timeout_sec": 0, 00:24:18.244 "reconnect_delay_sec": 0, 00:24:18.244 "fast_io_fail_timeout_sec": 0, 00:24:18.244 "disable_auto_failback": false, 00:24:18.244 "generate_uuids": false, 00:24:18.244 "transport_tos": 0, 00:24:18.244 "nvme_error_stat": false, 00:24:18.244 "rdma_srq_size": 0, 00:24:18.244 "io_path_stat": false, 00:24:18.244 "allow_accel_sequence": false, 00:24:18.244 "rdma_max_cq_size": 0, 00:24:18.244 "rdma_cm_event_timeout_ms": 0, 00:24:18.244 "dhchap_digests": [ 00:24:18.244 "sha256", 00:24:18.244 "sha384", 00:24:18.245 "sha512" 00:24:18.245 ], 00:24:18.245 "dhchap_dhgroups": [ 00:24:18.245 "null", 00:24:18.245 "ffdhe2048", 00:24:18.245 "ffdhe3072", 00:24:18.245 "ffdhe4096", 00:24:18.245 "ffdhe6144", 00:24:18.245 "ffdhe8192" 00:24:18.245 ] 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_nvme_attach_controller", 00:24:18.245 "params": { 00:24:18.245 "name": "nvme0", 00:24:18.245 "trtype": "TCP", 00:24:18.245 "adrfam": "IPv4", 00:24:18.245 "traddr": "10.0.0.2", 00:24:18.245 "trsvcid": "4420", 00:24:18.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.245 "prchk_reftag": false, 00:24:18.245 "prchk_guard": false, 00:24:18.245 "ctrlr_loss_timeout_sec": 0, 00:24:18.245 "reconnect_delay_sec": 0, 00:24:18.245 "fast_io_fail_timeout_sec": 0, 00:24:18.245 "psk": "key0", 00:24:18.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.245 "hdgst": false, 00:24:18.245 "ddgst": false 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_nvme_set_hotplug", 00:24:18.245 "params": { 00:24:18.245 "period_us": 100000, 00:24:18.245 "enable": false 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_enable_histogram", 00:24:18.245 "params": { 00:24:18.245 "name": "nvme0n1", 00:24:18.245 "enable": true 00:24:18.245 } 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "method": "bdev_wait_for_examine" 00:24:18.245 } 00:24:18.245 ] 00:24:18.245 }, 00:24:18.245 { 00:24:18.245 "subsystem": "nbd", 00:24:18.245 "config": [] 00:24:18.245 } 00:24:18.245 ] 00:24:18.245 }' 00:24:18.245 10:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2887472 00:24:18.245 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2887472 ']' 00:24:18.245 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2887472 00:24:18.245 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:18.245 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:18.245 
10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2887472 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2887472' 00:24:18.507 killing process with pid 2887472 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2887472 00:24:18.507 Received shutdown signal, test time was about 1.000000 seconds 00:24:18.507 00:24:18.507 Latency(us) 00:24:18.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.507 =================================================================================================================== 00:24:18.507 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2887472 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2887303 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2887303 ']' 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2887303 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2887303 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2887303' 00:24:18.507 killing process with pid 2887303 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2887303 00:24:18.507 [2024-05-15 10:18:04.239322] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:18.507 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2887303 00:24:18.769 10:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:18.769 10:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.769 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:18.769 10:18:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:24:18.769 "subsystems": [ 00:24:18.769 { 00:24:18.769 "subsystem": "keyring", 00:24:18.769 "config": [ 00:24:18.769 { 00:24:18.769 "method": "keyring_file_add_key", 00:24:18.769 "params": { 00:24:18.769 "name": "key0", 00:24:18.769 "path": "/tmp/tmp.GgiKPXKC3x" 00:24:18.769 } 00:24:18.769 } 00:24:18.769 ] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "iobuf", 00:24:18.769 "config": [ 00:24:18.769 { 00:24:18.769 "method": "iobuf_set_options", 00:24:18.769 "params": { 00:24:18.769 "small_pool_count": 8192, 00:24:18.769 "large_pool_count": 1024, 00:24:18.769 "small_bufsize": 8192, 00:24:18.769 "large_bufsize": 135168 00:24:18.769 } 00:24:18.769 } 00:24:18.769 ] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "sock", 00:24:18.769 "config": [ 00:24:18.769 { 00:24:18.769 "method": 
"sock_impl_set_options", 00:24:18.769 "params": { 00:24:18.769 "impl_name": "posix", 00:24:18.769 "recv_buf_size": 2097152, 00:24:18.769 "send_buf_size": 2097152, 00:24:18.769 "enable_recv_pipe": true, 00:24:18.769 "enable_quickack": false, 00:24:18.769 "enable_placement_id": 0, 00:24:18.769 "enable_zerocopy_send_server": true, 00:24:18.769 "enable_zerocopy_send_client": false, 00:24:18.769 "zerocopy_threshold": 0, 00:24:18.769 "tls_version": 0, 00:24:18.769 "enable_ktls": false 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "sock_impl_set_options", 00:24:18.769 "params": { 00:24:18.769 "impl_name": "ssl", 00:24:18.769 "recv_buf_size": 4096, 00:24:18.769 "send_buf_size": 4096, 00:24:18.769 "enable_recv_pipe": true, 00:24:18.769 "enable_quickack": false, 00:24:18.769 "enable_placement_id": 0, 00:24:18.769 "enable_zerocopy_send_server": true, 00:24:18.769 "enable_zerocopy_send_client": false, 00:24:18.769 "zerocopy_threshold": 0, 00:24:18.769 "tls_version": 0, 00:24:18.769 "enable_ktls": false 00:24:18.769 } 00:24:18.769 } 00:24:18.769 ] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "vmd", 00:24:18.769 "config": [] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "accel", 00:24:18.769 "config": [ 00:24:18.769 { 00:24:18.769 "method": "accel_set_options", 00:24:18.769 "params": { 00:24:18.769 "small_cache_size": 128, 00:24:18.769 "large_cache_size": 16, 00:24:18.769 "task_count": 2048, 00:24:18.769 "sequence_count": 2048, 00:24:18.769 "buf_count": 2048 00:24:18.769 } 00:24:18.769 } 00:24:18.769 ] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "bdev", 00:24:18.769 "config": [ 00:24:18.769 { 00:24:18.769 "method": "bdev_set_options", 00:24:18.769 "params": { 00:24:18.769 "bdev_io_pool_size": 65535, 00:24:18.769 "bdev_io_cache_size": 256, 00:24:18.769 "bdev_auto_examine": true, 00:24:18.769 "iobuf_small_cache_size": 128, 00:24:18.769 "iobuf_large_cache_size": 16 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "bdev_raid_set_options", 00:24:18.769 "params": { 00:24:18.769 "process_window_size_kb": 1024 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "bdev_iscsi_set_options", 00:24:18.769 "params": { 00:24:18.769 "timeout_sec": 30 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "bdev_nvme_set_options", 00:24:18.769 "params": { 00:24:18.769 "action_on_timeout": "none", 00:24:18.769 "timeout_us": 0, 00:24:18.769 "timeout_admin_us": 0, 00:24:18.769 "keep_alive_timeout_ms": 10000, 00:24:18.769 "arbitration_burst": 0, 00:24:18.769 "low_priority_weight": 0, 00:24:18.769 "medium_priority_weight": 0, 00:24:18.769 "high_priority_weight": 0, 00:24:18.769 "nvme_adminq_poll_period_us": 10000, 00:24:18.769 "nvme_ioq_poll_period_us": 0, 00:24:18.769 "io_queue_requests": 0, 00:24:18.769 "delay_cmd_submit": true, 00:24:18.769 "transport_retry_count": 4, 00:24:18.769 "bdev_retry_count": 3, 00:24:18.769 "transport_ack_timeout": 0, 00:24:18.769 "ctrlr_loss_timeout_sec": 0, 00:24:18.769 "reconnect_delay_sec": 0, 00:24:18.769 "fast_io_fail_timeout_sec": 0, 00:24:18.769 "disable_auto_failback": false, 00:24:18.769 "generate_uuids": false, 00:24:18.769 "transport_tos": 0, 00:24:18.769 "nvme_error_stat": false, 00:24:18.769 "rdma_srq_size": 0, 00:24:18.769 "io_path_stat": false, 00:24:18.769 "allow_accel_sequence": false, 00:24:18.769 "rdma_max_cq_size": 0, 00:24:18.769 "rdma_cm_event_timeout_ms": 0, 00:24:18.769 "dhchap_digests": [ 00:24:18.769 "sha256", 00:24:18.769 "sha384", 00:24:18.769 "sha512" 
00:24:18.769 ], 00:24:18.769 "dhchap_dhgroups": [ 00:24:18.769 "null", 00:24:18.769 "ffdhe2048", 00:24:18.769 "ffdhe3072", 00:24:18.769 "ffdhe4096", 00:24:18.769 "ffdhe6144", 00:24:18.769 "ffdhe8192" 00:24:18.769 ] 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "bdev_nvme_set_hotplug", 00:24:18.769 "params": { 00:24:18.769 "period_us": 100000, 00:24:18.769 "enable": false 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "bdev_malloc_create", 00:24:18.769 "params": { 00:24:18.769 "name": "malloc0", 00:24:18.769 "num_blocks": 8192, 00:24:18.769 "block_size": 4096, 00:24:18.769 "physical_block_size": 4096, 00:24:18.769 "uuid": "a3078e82-e07f-4e7a-81f7-bee7254da1a5", 00:24:18.769 "optimal_io_boundary": 0 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "bdev_wait_for_examine" 00:24:18.769 } 00:24:18.769 ] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "nbd", 00:24:18.769 "config": [] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "scheduler", 00:24:18.769 "config": [ 00:24:18.769 { 00:24:18.769 "method": "framework_set_scheduler", 00:24:18.769 "params": { 00:24:18.769 "name": "static" 00:24:18.769 } 00:24:18.769 } 00:24:18.769 ] 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "subsystem": "nvmf", 00:24:18.769 "config": [ 00:24:18.769 { 00:24:18.769 "method": "nvmf_set_config", 00:24:18.769 "params": { 00:24:18.769 "discovery_filter": "match_any", 00:24:18.769 "admin_cmd_passthru": { 00:24:18.769 "identify_ctrlr": false 00:24:18.769 } 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "nvmf_set_max_subsystems", 00:24:18.769 "params": { 00:24:18.769 "max_subsystems": 1024 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "nvmf_set_crdt", 00:24:18.769 "params": { 00:24:18.769 "crdt1": 0, 00:24:18.769 "crdt2": 0, 00:24:18.769 "crdt3": 0 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "nvmf_create_transport", 00:24:18.769 "params": { 00:24:18.769 "trtype": "TCP", 00:24:18.769 "max_queue_depth": 128, 00:24:18.769 "max_io_qpairs_per_ctrlr": 127, 00:24:18.769 "in_capsule_data_size": 4096, 00:24:18.769 "max_io_size": 131072, 00:24:18.769 "io_unit_size": 131072, 00:24:18.769 "max_aq_depth": 128, 00:24:18.769 "num_shared_buffers": 511, 00:24:18.769 "buf_cache_size": 4294967295, 00:24:18.769 "dif_insert_or_strip": false, 00:24:18.769 "zcopy": false, 00:24:18.769 "c2h_success": false, 00:24:18.769 "sock_priority": 0, 00:24:18.769 "abort_timeout_sec": 1, 00:24:18.769 "ack_timeout": 0, 00:24:18.769 "data_wr_pool_size": 0 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "nvmf_create_subsystem", 00:24:18.769 "params": { 00:24:18.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.769 "allow_any_host": false, 00:24:18.769 "serial_number": "00000000000000000000", 00:24:18.769 "model_number": "SPDK bdev Controller", 00:24:18.769 "max_namespaces": 32, 00:24:18.769 "min_cntlid": 1, 00:24:18.769 "max_cntlid": 65519, 00:24:18.769 "ana_reporting": false 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "nvmf_subsystem_add_host", 00:24:18.769 "params": { 00:24:18.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.769 "host": "nqn.2016-06.io.spdk:host1", 00:24:18.769 "psk": "key0" 00:24:18.769 } 00:24:18.769 }, 00:24:18.769 { 00:24:18.769 "method": "nvmf_subsystem_add_ns", 00:24:18.769 "params": { 00:24:18.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.769 "namespace": { 00:24:18.769 "nsid": 1, 00:24:18.769 "bdev_name": "malloc0", 00:24:18.770 
"nguid": "A3078E82E07F4E7A81F7BEE7254DA1A5", 00:24:18.770 "uuid": "a3078e82-e07f-4e7a-81f7-bee7254da1a5", 00:24:18.770 "no_auto_visible": false 00:24:18.770 } 00:24:18.770 } 00:24:18.770 }, 00:24:18.770 { 00:24:18.770 "method": "nvmf_subsystem_add_listener", 00:24:18.770 "params": { 00:24:18.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.770 "listen_address": { 00:24:18.770 "trtype": "TCP", 00:24:18.770 "adrfam": "IPv4", 00:24:18.770 "traddr": "10.0.0.2", 00:24:18.770 "trsvcid": "4420" 00:24:18.770 }, 00:24:18.770 "secure_channel": true 00:24:18.770 } 00:24:18.770 } 00:24:18.770 ] 00:24:18.770 } 00:24:18.770 ] 00:24:18.770 }' 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2888188 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2888188 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2888188 ']' 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:18.770 10:18:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.770 [2024-05-15 10:18:04.421454] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:24:18.770 [2024-05-15 10:18:04.421509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.770 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.770 [2024-05-15 10:18:04.485126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.770 [2024-05-15 10:18:04.515901] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.770 [2024-05-15 10:18:04.515940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.770 [2024-05-15 10:18:04.515948] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.770 [2024-05-15 10:18:04.515954] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.770 [2024-05-15 10:18:04.515959] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:18.770 [2024-05-15 10:18:04.516017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.031 [2024-05-15 10:18:04.698643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.031 [2024-05-15 10:18:04.730630] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:19.031 [2024-05-15 10:18:04.730678] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.031 [2024-05-15 10:18:04.743611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2888299 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2888299 /var/tmp/bdevperf.sock 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2888299 ']' 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
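With the new target listening on 10.0.0.2:4420, the client half of the test repeats the pattern used for the first bdevperf run: start bdevperf idle against its own RPC socket, load the TLS PSK into the keyring, attach the controller with that key, and drive I/O through bdevperf.py. A condensed sketch built from the commands visible in this log (workspace prefix dropped; this second run actually passes the same settings as a JSON config on /dev/fd/63 instead of issuing the RPCs one by one):

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GgiKPXKC3x
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
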
00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:19.604 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:19.605 10:18:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.605 10:18:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:24:19.605 "subsystems": [ 00:24:19.605 { 00:24:19.605 "subsystem": "keyring", 00:24:19.605 "config": [ 00:24:19.605 { 00:24:19.605 "method": "keyring_file_add_key", 00:24:19.605 "params": { 00:24:19.605 "name": "key0", 00:24:19.605 "path": "/tmp/tmp.GgiKPXKC3x" 00:24:19.605 } 00:24:19.605 } 00:24:19.605 ] 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "subsystem": "iobuf", 00:24:19.605 "config": [ 00:24:19.605 { 00:24:19.605 "method": "iobuf_set_options", 00:24:19.605 "params": { 00:24:19.605 "small_pool_count": 8192, 00:24:19.605 "large_pool_count": 1024, 00:24:19.605 "small_bufsize": 8192, 00:24:19.605 "large_bufsize": 135168 00:24:19.605 } 00:24:19.605 } 00:24:19.605 ] 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "subsystem": "sock", 00:24:19.605 "config": [ 00:24:19.605 { 00:24:19.605 "method": "sock_impl_set_options", 00:24:19.605 "params": { 00:24:19.605 "impl_name": "posix", 00:24:19.605 "recv_buf_size": 2097152, 00:24:19.605 "send_buf_size": 2097152, 00:24:19.605 "enable_recv_pipe": true, 00:24:19.605 "enable_quickack": false, 00:24:19.605 "enable_placement_id": 0, 00:24:19.605 "enable_zerocopy_send_server": true, 00:24:19.605 "enable_zerocopy_send_client": false, 00:24:19.605 "zerocopy_threshold": 0, 00:24:19.605 "tls_version": 0, 00:24:19.605 "enable_ktls": false 00:24:19.605 } 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "sock_impl_set_options", 00:24:19.605 "params": { 00:24:19.605 "impl_name": "ssl", 00:24:19.605 "recv_buf_size": 4096, 00:24:19.605 "send_buf_size": 4096, 00:24:19.605 "enable_recv_pipe": true, 00:24:19.605 "enable_quickack": false, 00:24:19.605 "enable_placement_id": 0, 00:24:19.605 "enable_zerocopy_send_server": true, 00:24:19.605 "enable_zerocopy_send_client": false, 00:24:19.605 "zerocopy_threshold": 0, 00:24:19.605 "tls_version": 0, 00:24:19.605 "enable_ktls": false 00:24:19.605 } 00:24:19.605 } 00:24:19.605 ] 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "subsystem": "vmd", 00:24:19.605 "config": [] 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "subsystem": "accel", 00:24:19.605 "config": [ 00:24:19.605 { 00:24:19.605 "method": "accel_set_options", 00:24:19.605 "params": { 00:24:19.605 "small_cache_size": 128, 00:24:19.605 "large_cache_size": 16, 00:24:19.605 "task_count": 2048, 00:24:19.605 "sequence_count": 2048, 00:24:19.605 "buf_count": 2048 00:24:19.605 } 00:24:19.605 } 00:24:19.605 ] 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "subsystem": "bdev", 00:24:19.605 "config": [ 00:24:19.605 { 00:24:19.605 "method": "bdev_set_options", 00:24:19.605 "params": { 00:24:19.605 "bdev_io_pool_size": 65535, 00:24:19.605 "bdev_io_cache_size": 256, 00:24:19.605 "bdev_auto_examine": true, 00:24:19.605 "iobuf_small_cache_size": 128, 00:24:19.605 "iobuf_large_cache_size": 16 00:24:19.605 } 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "bdev_raid_set_options", 00:24:19.605 "params": { 00:24:19.605 "process_window_size_kb": 1024 00:24:19.605 } 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "bdev_iscsi_set_options", 00:24:19.605 "params": { 00:24:19.605 "timeout_sec": 30 00:24:19.605 } 
00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "bdev_nvme_set_options", 00:24:19.605 "params": { 00:24:19.605 "action_on_timeout": "none", 00:24:19.605 "timeout_us": 0, 00:24:19.605 "timeout_admin_us": 0, 00:24:19.605 "keep_alive_timeout_ms": 10000, 00:24:19.605 "arbitration_burst": 0, 00:24:19.605 "low_priority_weight": 0, 00:24:19.605 "medium_priority_weight": 0, 00:24:19.605 "high_priority_weight": 0, 00:24:19.605 "nvme_adminq_poll_period_us": 10000, 00:24:19.605 "nvme_ioq_poll_period_us": 0, 00:24:19.605 "io_queue_requests": 512, 00:24:19.605 "delay_cmd_submit": true, 00:24:19.605 "transport_retry_count": 4, 00:24:19.605 "bdev_retry_count": 3, 00:24:19.605 "transport_ack_timeout": 0, 00:24:19.605 "ctrlr_loss_timeout_sec": 0, 00:24:19.605 "reconnect_delay_sec": 0, 00:24:19.605 "fast_io_fail_timeout_sec": 0, 00:24:19.605 "disable_auto_failback": false, 00:24:19.605 "generate_uuids": false, 00:24:19.605 "transport_tos": 0, 00:24:19.605 "nvme_error_stat": false, 00:24:19.605 "rdma_srq_size": 0, 00:24:19.605 "io_path_stat": false, 00:24:19.605 "allow_accel_sequence": false, 00:24:19.605 "rdma_max_cq_size": 0, 00:24:19.605 "rdma_cm_event_timeout_ms": 0, 00:24:19.605 "dhchap_digests": [ 00:24:19.605 "sha256", 00:24:19.605 "sha384", 00:24:19.605 "sha512" 00:24:19.605 ], 00:24:19.605 "dhchap_dhgroups": [ 00:24:19.605 "null", 00:24:19.605 "ffdhe2048", 00:24:19.605 "ffdhe3072", 00:24:19.605 "ffdhe4096", 00:24:19.605 "ffdhe6144", 00:24:19.605 "ffdhe8192" 00:24:19.605 ] 00:24:19.605 } 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "bdev_nvme_attach_controller", 00:24:19.605 "params": { 00:24:19.605 "name": "nvme0", 00:24:19.605 "trtype": "TCP", 00:24:19.605 "adrfam": "IPv4", 00:24:19.605 "traddr": "10.0.0.2", 00:24:19.605 "trsvcid": "4420", 00:24:19.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.605 "prchk_reftag": false, 00:24:19.605 "prchk_guard": false, 00:24:19.605 "ctrlr_loss_timeout_sec": 0, 00:24:19.605 "reconnect_delay_sec": 0, 00:24:19.605 "fast_io_fail_timeout_sec": 0, 00:24:19.605 "psk": "key0", 00:24:19.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.605 "hdgst": false, 00:24:19.605 "ddgst": false 00:24:19.605 } 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "bdev_nvme_set_hotplug", 00:24:19.605 "params": { 00:24:19.605 "period_us": 100000, 00:24:19.605 "enable": false 00:24:19.605 } 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "bdev_enable_histogram", 00:24:19.605 "params": { 00:24:19.605 "name": "nvme0n1", 00:24:19.605 "enable": true 00:24:19.605 } 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "method": "bdev_wait_for_examine" 00:24:19.605 } 00:24:19.605 ] 00:24:19.605 }, 00:24:19.605 { 00:24:19.605 "subsystem": "nbd", 00:24:19.605 "config": [] 00:24:19.605 } 00:24:19.605 ] 00:24:19.605 }' 00:24:19.605 [2024-05-15 10:18:05.269344] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:24:19.605 [2024-05-15 10:18:05.269395] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888299 ] 00:24:19.605 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.605 [2024-05-15 10:18:05.343399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.605 [2024-05-15 10:18:05.371906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.867 [2024-05-15 10:18:05.492052] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:20.440 10:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:20.440 10:18:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:24:20.440 10:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.440 10:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:20.440 10:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.440 10:18:06 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.701 Running I/O for 1 seconds... 00:24:21.645 00:24:21.645 Latency(us) 00:24:21.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:21.645 Verification LBA range: start 0x0 length 0x2000 00:24:21.645 nvme0n1 : 1.09 1007.02 3.93 0.00 0.00 122988.77 5625.17 167772.16 00:24:21.645 =================================================================================================================== 00:24:21.645 Total : 1007.02 3.93 0.00 0.00 122988.77 5625.17 167772.16 00:24:21.645 0 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:24:21.645 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:21.645 nvmf_trace.0 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2888299 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2888299 ']' 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2888299 
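The jq -r '.[].name' / [[ nvme0 == \n\v\m\e\0 ]] sequence traced above is simply the test confirming that the TLS-attached controller is visible on the bdevperf instance before it measures I/O. Roughly, against the same socket paths used in this run:

    # verify the attached controller exists, then kick off the workload
    name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || exit 1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
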
00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2888299 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2888299' 00:24:21.907 killing process with pid 2888299 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2888299 00:24:21.907 Received shutdown signal, test time was about 1.000000 seconds 00:24:21.907 00:24:21.907 Latency(us) 00:24:21.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.907 =================================================================================================================== 00:24:21.907 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2888299 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.907 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.907 rmmod nvme_tcp 00:24:21.907 rmmod nvme_fabrics 00:24:22.168 rmmod nvme_keyring 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2888188 ']' 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2888188 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2888188 ']' 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2888188 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2888188 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2888188' 00:24:22.168 killing process with pid 2888188 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2888188 00:24:22.168 [2024-05-15 10:18:07.782278] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- 
# wait 2888188 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.168 10:18:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.720 10:18:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:24.720 10:18:09 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8Gp2Odtyp3 /tmp/tmp.NkFvWk76ti /tmp/tmp.GgiKPXKC3x 00:24:24.720 00:24:24.720 real 1m18.187s 00:24:24.720 user 1m56.976s 00:24:24.720 sys 0m27.297s 00:24:24.720 10:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:24.720 10:18:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.720 ************************************ 00:24:24.720 END TEST nvmf_tls 00:24:24.720 ************************************ 00:24:24.720 10:18:10 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:24.720 10:18:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:24.720 10:18:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:24.720 10:18:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:24.720 ************************************ 00:24:24.720 START TEST nvmf_fips 00:24:24.720 ************************************ 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:24.720 * Looking for test storage... 
00:24:24.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.720 10:18:10 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:24.720 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:24:24.721 Error setting digest 00:24:24.721 00620FF0867F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:24.721 00620FF0867F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:24:24.721 10:18:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.875 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.876 
10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:32.876 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:32.876 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:32.876 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:32.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:24:32.876 00:24:32.876 --- 10.0.0.2 ping statistics --- 00:24:32.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.876 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.507 ms 00:24:32.876 00:24:32.876 --- 10.0.0.1 ping statistics --- 00:24:32.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.876 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2893388 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2893388 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2893388 ']' 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:32.876 10:18:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:32.876 [2024-05-15 10:18:17.654541] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:24:32.876 [2024-05-15 10:18:17.654612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.876 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.876 [2024-05-15 10:18:17.740147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.876 [2024-05-15 10:18:17.786544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.876 [2024-05-15 10:18:17.786599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:32.876 [2024-05-15 10:18:17.786607] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.877 [2024-05-15 10:18:17.786614] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.877 [2024-05-15 10:18:17.786620] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:32.877 [2024-05-15 10:18:17.786643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:32.877 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.877 [2024-05-15 10:18:18.607466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.877 [2024-05-15 10:18:18.623434] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:32.877 [2024-05-15 10:18:18.623493] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:32.877 [2024-05-15 10:18:18.623707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.877 [2024-05-15 10:18:18.653497] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:32.877 malloc0 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2893720 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2893720 /var/tmp/bdevperf.sock 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@828 -- # '[' -z 2893720 ']' 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:33.138 10:18:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:33.138 [2024-05-15 10:18:18.743409] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:24:33.138 [2024-05-15 10:18:18.743483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893720 ] 00:24:33.139 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.139 [2024-05-15 10:18:18.800296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.139 [2024-05-15 10:18:18.836227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.085 10:18:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:34.085 10:18:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:24:34.085 10:18:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:34.085 [2024-05-15 10:18:19.655744] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.085 [2024-05-15 10:18:19.655813] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:34.085 TLSTESTn1 00:24:34.085 10:18:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:34.085 Running I/O for 10 seconds... 
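Note on the TLS data path exercised above: bdevperf is started detached (-z) with its own RPC socket, a TLS-protected NVMe/TCP controller is attached to it using the PSK interchange file written to key.txt a few steps earlier, and the verify workload is then kicked off through bdevperf.py. A condensed sketch of that sequence follows, with paths shortened relative to the SPDK tree; the addresses, NQNs and queue depth are the ones recorded in this run.

# Start bdevperf idle (-z), listening on a dedicated RPC socket
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Attach a TLS-enabled NVMe/TCP controller, pointing --psk at the interchange-format key file
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  --psk test/nvmf/fips/key.txt

# Run the configured verify workload and collect the per-device statistics shown below
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests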
00:24:46.364 00:24:46.365 Latency(us) 00:24:46.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.365 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:46.365 Verification LBA range: start 0x0 length 0x2000 00:24:46.365 TLSTESTn1 : 10.09 1375.65 5.37 0.00 0.00 92701.98 6198.61 178257.92 00:24:46.365 =================================================================================================================== 00:24:46.365 Total : 1375.65 5.37 0.00 0.00 92701.98 6198.61 178257.92 00:24:46.365 0 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:24:46.365 10:18:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:46.365 nvmf_trace.0 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2893720 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2893720 ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2893720 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2893720 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2893720' 00:24:46.365 killing process with pid 2893720 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2893720 00:24:46.365 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.365 00:24:46.365 Latency(us) 00:24:46.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.365 =================================================================================================================== 00:24:46.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.365 [2024-05-15 10:18:30.136849] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2893720 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:46.365 rmmod nvme_tcp 00:24:46.365 rmmod nvme_fabrics 00:24:46.365 rmmod nvme_keyring 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2893388 ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2893388 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2893388 ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2893388 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2893388 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2893388' 00:24:46.365 killing process with pid 2893388 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2893388 00:24:46.365 [2024-05-15 10:18:30.383683] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:46.365 [2024-05-15 10:18:30.383715] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2893388 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.365 10:18:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.937 10:18:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:46.937 10:18:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:46.937 00:24:46.937 real 0m22.503s 00:24:46.937 user 0m23.710s 00:24:46.937 sys 0m9.531s 00:24:46.937 10:18:32 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:46.937 10:18:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:46.937 ************************************ 00:24:46.937 END TEST nvmf_fips 00:24:46.937 ************************************ 00:24:46.937 10:18:32 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:46.937 10:18:32 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:46.938 10:18:32 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:46.938 10:18:32 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:46.938 10:18:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:46.938 ************************************ 00:24:46.938 START TEST nvmf_fuzz 00:24:46.938 ************************************ 00:24:46.938 10:18:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:47.200 * Looking for test storage... 00:24:47.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:47.200 10:18:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.356 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.356 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.356 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.356 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.356 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.356 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.356 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.357 10:18:39 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:55.357 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:55.357 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:55.357 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.357 10:18:39 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:55.357 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.357 10:18:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:24:55.357 00:24:55.357 --- 10.0.0.2 ping statistics --- 00:24:55.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.357 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.586 ms 00:24:55.357 00:24:55.357 --- 10.0.0.1 ping statistics --- 00:24:55.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.357 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2900060 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2900060 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@828 -- # '[' -z 2900060 ']' 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
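As in the FIPS run earlier in this log, the fuzz test's nvmf_tcp_init step carves the target side into its own network namespace: the first ice port (cvl_0_0) is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as 10.0.0.1, both links plus the namespace loopback are brought up, an iptables rule admits TCP port 4420, and nvmf_tgt is then launched under ip netns exec. Condensed into a sketch (interface names, addresses and the core mask are specific to this host):

# Flush any stale addresses and split the two ports across namespaces
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the 10.0.0.0/24 test link and bring the links up
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on port 4420 and start the target inside the namespace
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1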
00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@861 -- # return 0 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.357 10:18:40 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.357 Malloc0 00:24:55.357 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.357 10:18:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.357 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:55.358 10:18:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:27.487 Fuzzing completed. 
Shutting down the fuzz application 00:25:27.487 00:25:27.487 Dumping successful admin opcodes: 00:25:27.487 8, 9, 10, 24, 00:25:27.487 Dumping successful io opcodes: 00:25:27.487 0, 9, 00:25:27.487 NS: 0x200003aeff00 I/O qp, Total commands completed: 900134, total successful commands: 5243, random_seed: 3946647488 00:25:27.487 NS: 0x200003aeff00 admin qp, Total commands completed: 111928, total successful commands: 919, random_seed: 2686638272 00:25:27.488 10:19:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:27.488 Fuzzing completed. Shutting down the fuzz application 00:25:27.488 00:25:27.488 Dumping successful admin opcodes: 00:25:27.488 24, 00:25:27.488 Dumping successful io opcodes: 00:25:27.488 00:25:27.488 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3092147087 00:25:27.488 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3092228863 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.488 rmmod nvme_tcp 00:25:27.488 rmmod nvme_fabrics 00:25:27.488 rmmod nvme_keyring 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2900060 ']' 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2900060 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@947 -- # '[' -z 2900060 ']' 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # kill -0 2900060 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # uname 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2900060 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 
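Both fuzzer passes above drive test/app/fuzz/nvme_fuzz/nvme_fuzz at the Malloc0-backed subsystem created a few steps earlier: first a timed randomized pass over the admin and I/O queues with the -t 30 -S 123456 options recorded in the trace, then a replay of the canned command set in example.json via -j. Stripped of the absolute workspace prefix, the two invocations were:

# Timed randomized pass against the TCP trid of nqn.2016-06.io.spdk:cnode1
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
  -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

# Second pass driven by the canned example.json command file
test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 \
  -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
  -j test/app/fuzz/nvme_fuzz/example.json -a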
00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2900060' 00:25:27.488 killing process with pid 2900060 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # kill 2900060 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@971 -- # wait 2900060 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.488 10:19:12 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.404 10:19:15 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:29.404 10:19:15 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:29.404 00:25:29.404 real 0m42.401s 00:25:29.404 user 0m54.502s 00:25:29.404 sys 0m16.809s 00:25:29.404 10:19:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:29.404 10:19:15 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:29.404 ************************************ 00:25:29.404 END TEST nvmf_fuzz 00:25:29.404 ************************************ 00:25:29.404 10:19:15 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:29.404 10:19:15 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:29.404 10:19:15 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:29.404 10:19:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.404 ************************************ 00:25:29.404 START TEST nvmf_multiconnection 00:25:29.404 ************************************ 00:25:29.404 10:19:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:29.667 * Looking for test storage... 
00:25:29.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:25:29.667 10:19:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:36.329 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.330 10:19:22 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:36.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:36.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:36.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:36.330 10:19:22 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:36.330 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.330 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
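The nvmf_tcp_init steps traced here can be restated in a few lines: the target-side port (cvl_0_0 on this host) is moved into a private network namespace, both ends get an address on 10.0.0.0/24, and a firewall rule plus two pings confirm the path before the target application is started. The device names and addresses below are the ones from this run and would differ on other machines:

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                         # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in the root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP on the test port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                  # target -> initiator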
00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:36.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:25:36.591 00:25:36.591 --- 10.0.0.2 ping statistics --- 00:25:36.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.591 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:25:36.591 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:25:36.854 00:25:36.854 --- 10.0.0.1 ping statistics --- 00:25:36.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.854 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2910388 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2910388 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@828 -- # '[' -z 2910388 ']' 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
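With the namespace in place, the trace loads the kernel initiator module and launches nvmf_tgt inside the namespace before any RPCs are issued. A condensed version follows, with the readiness check written as a simple rpc.py retry loop; the real waitforlisten helper in autotest_common.sh checks the PID and the socket more carefully, so this is an approximation:

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target answers; 100 x 0.1 s is an arbitrary bound.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done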
00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:36.854 10:19:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.854 [2024-05-15 10:19:22.503680] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:25:36.854 [2024-05-15 10:19:22.503747] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.854 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.854 [2024-05-15 10:19:22.576823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.854 [2024-05-15 10:19:22.617354] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.854 [2024-05-15 10:19:22.617400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.854 [2024-05-15 10:19:22.617408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.854 [2024-05-15 10:19:22.617415] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.854 [2024-05-15 10:19:22.617421] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.854 [2024-05-15 10:19:22.617580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.854 [2024-05-15 10:19:22.617704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.854 [2024-05-15 10:19:22.617867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.854 [2024-05-15 10:19:22.617868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@861 -- # return 0 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 [2024-05-15 10:19:23.334084] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 Malloc1 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 [2024-05-15 10:19:23.401204] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:37.800 [2024-05-15 10:19:23.401440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 Malloc2 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 Malloc3 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 Malloc4 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.800 Malloc5 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:37.800 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 Malloc6 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:38.062 
10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 Malloc7 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 
-- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 Malloc8 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 Malloc9 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 10:19:23 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.062 Malloc10 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.062 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.326 Malloc11 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:38.326 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.327 10:19:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:40.244 10:19:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:40.245 10:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:25:40.245 10:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:25:40.245 10:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:25:40.245 10:19:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK1 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:42.162 10:19:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:43.548 10:19:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:43.548 10:19:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:25:43.548 10:19:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:25:43.548 10:19:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:25:43.548 10:19:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:25:45.464 10:19:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:25:45.464 10:19:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:25:45.464 10:19:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK2 00:25:45.464 10:19:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:25:45.464 10:19:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:25:45.464 10:19:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:25:45.464 10:19:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:45.464 10:19:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:47.381 10:19:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:47.381 10:19:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:25:47.381 10:19:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:25:47.381 10:19:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:25:47.381 10:19:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK3 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.297 10:19:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:50.686 10:19:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:50.686 10:19:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:25:50.686 10:19:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.686 10:19:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:25:50.686 10:19:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK4 00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 
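The subsystems being connected in this loop were created earlier in the trace with the same four RPCs per index, driven through rpc_cmd (a thin wrapper around rpc.py). Restated as a direct rpc.py loop over the eleven subsystems, using the same arguments as the traced calls:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # same flags as the traced call
    for i in $(seq 1 11); do
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"              # 64 MiB bdev, 512-byte blocks
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done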
00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.269 10:19:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:54.660 10:19:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:54.660 10:19:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:25:54.660 10:19:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:25:54.660 10:19:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:25:54.660 10:19:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK5 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:56.614 10:19:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:58.529 10:19:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:58.529 10:19:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:25:58.529 10:19:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.529 10:19:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:25:58.529 10:19:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK6 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.449 10:19:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:02.367 10:19:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:02.367 10:19:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:26:02.367 10:19:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:26:02.367 10:19:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:26:02.367 10:19:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK7 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:04.286 10:19:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:06.205 10:19:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:06.205 10:19:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:26:06.205 10:19:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.205 10:19:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:26:06.205 10:19:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK8 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:08.124 10:19:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:10.041 10:19:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:10.041 10:19:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 
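Each connect in this loop is followed by a serial-number poll before the next subsystem is attempted. The sketch below is a simplified restatement of the nvme connect / waitforserial pair as it appears in the trace; the real helper in autotest_common.sh also supports waiting for more than one device:

    connect_and_wait() {
        local idx=$1 tries=0
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$idx" -a 10.0.0.2 -s 4420
        while (( tries++ <= 15 )); do
            sleep 2
            # A block device whose serial matches SPDK$idx means the controller is up.
            (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$idx") >= 1 )) && return 0
        done
        return 1
    }
    connect_and_wait 9   # equivalent to the cnode9 connect traced just above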
00:26:10.041 10:19:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:26:10.041 10:19:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:26:10.041 10:19:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK9 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:11.605 10:19:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:13.525 10:19:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:13.525 10:19:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:26:13.525 10:19:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.525 10:19:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:26:13.525 10:19:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # sleep 2 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK10 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.078 10:20:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:17.993 10:20:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:17.993 10:20:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local i=0 00:26:17.993 10:20:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:26:17.993 10:20:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:26:17.993 10:20:03 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # sleep 2 00:26:19.910 10:20:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:26:19.910 10:20:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:26:19.910 10:20:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # grep -c SPDK11 00:26:19.910 10:20:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:26:19.910 10:20:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:26:19.910 10:20:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # return 0 00:26:19.910 10:20:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:19.910 [global] 00:26:19.910 thread=1 00:26:19.910 invalidate=1 00:26:19.910 rw=read 00:26:19.910 time_based=1 00:26:19.910 runtime=10 00:26:19.910 ioengine=libaio 00:26:19.910 direct=1 00:26:19.910 bs=262144 00:26:19.910 iodepth=64 00:26:19.910 norandommap=1 00:26:19.910 numjobs=1 00:26:19.910 00:26:19.910 [job0] 00:26:19.910 filename=/dev/nvme0n1 00:26:19.910 [job1] 00:26:19.910 filename=/dev/nvme10n1 00:26:19.910 [job2] 00:26:19.910 filename=/dev/nvme1n1 00:26:19.910 [job3] 00:26:19.910 filename=/dev/nvme2n1 00:26:19.910 [job4] 00:26:19.910 filename=/dev/nvme3n1 00:26:19.910 [job5] 00:26:19.910 filename=/dev/nvme4n1 00:26:19.910 [job6] 00:26:19.910 filename=/dev/nvme5n1 00:26:19.910 [job7] 00:26:19.910 filename=/dev/nvme6n1 00:26:19.910 [job8] 00:26:19.910 filename=/dev/nvme7n1 00:26:19.910 [job9] 00:26:19.910 filename=/dev/nvme8n1 00:26:19.910 [job10] 00:26:19.910 filename=/dev/nvme9n1 00:26:19.910 Could not set queue depth (nvme0n1) 00:26:19.910 Could not set queue depth (nvme10n1) 00:26:19.910 Could not set queue depth (nvme1n1) 00:26:19.910 Could not set queue depth (nvme2n1) 00:26:19.910 Could not set queue depth (nvme3n1) 00:26:19.910 Could not set queue depth (nvme4n1) 00:26:19.910 Could not set queue depth (nvme5n1) 00:26:19.910 Could not set queue depth (nvme6n1) 00:26:19.910 Could not set queue depth (nvme7n1) 00:26:19.910 Could not set queue depth (nvme8n1) 00:26:19.910 Could not set queue depth (nvme9n1) 00:26:20.186 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.186 fio-3.35 00:26:20.186 Starting 11 threads 00:26:32.446 00:26:32.446 job0: (groupid=0, jobs=1): err= 0: pid=2919148: Wed May 15 10:20:16 2024 00:26:32.446 read: IOPS=671, BW=168MiB/s (176MB/s)(1684MiB/10031msec) 00:26:32.446 slat (usec): min=6, max=127852, avg=1317.67, stdev=5119.97 00:26:32.446 clat (msec): min=6, max=420, avg=93.88, stdev=46.17 00:26:32.446 lat (msec): min=6, max=420, avg=95.19, stdev=46.67 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 21], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 58], 00:26:32.447 | 30.00th=[ 68], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 97], 00:26:32.447 | 70.00th=[ 112], 80.00th=[ 126], 90.00th=[ 146], 95.00th=[ 169], 00:26:32.447 | 99.00th=[ 230], 99.50th=[ 380], 99.90th=[ 418], 99.95th=[ 418], 00:26:32.447 | 99.99th=[ 422] 00:26:32.447 bw ( KiB/s): min=90624, max=292352, per=10.05%, avg=170803.20, stdev=51780.23, samples=20 00:26:32.447 iops : min= 354, max= 1142, avg=667.20, stdev=202.27, samples=20 00:26:32.447 lat (msec) : 10=0.21%, 20=0.73%, 50=10.97%, 100=50.84%, 250=36.45% 00:26:32.447 lat (msec) : 500=0.80% 00:26:32.447 cpu : usr=0.19%, sys=2.11%, ctx=1594, majf=0, minf=4097 00:26:32.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:32.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.447 issued rwts: total=6735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.447 job1: (groupid=0, jobs=1): err= 0: pid=2919149: Wed May 15 10:20:16 2024 00:26:32.447 read: IOPS=593, BW=148MiB/s (156MB/s)(1499MiB/10100msec) 00:26:32.447 slat (usec): min=7, max=132454, avg=1289.33, stdev=6017.60 00:26:32.447 clat (msec): min=5, max=359, avg=106.38, stdev=52.43 00:26:32.447 lat (msec): min=7, max=375, avg=107.67, stdev=53.05 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 44], 20.00th=[ 64], 00:26:32.447 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 101], 60.00th=[ 116], 00:26:32.447 | 70.00th=[ 131], 80.00th=[ 148], 90.00th=[ 169], 95.00th=[ 192], 00:26:32.447 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 359], 00:26:32.447 | 99.99th=[ 359] 00:26:32.447 bw ( KiB/s): min=60416, max=238592, per=8.93%, avg=151884.80, stdev=47543.15, samples=20 00:26:32.447 iops : min= 236, max= 932, avg=593.30, stdev=185.72, samples=20 00:26:32.447 lat (msec) : 10=0.10%, 20=2.27%, 50=10.34%, 100=37.27%, 250=48.21% 00:26:32.447 lat (msec) : 500=1.82% 00:26:32.447 cpu : usr=0.19%, sys=1.99%, ctx=1608, majf=0, minf=4097 00:26:32.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:32.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.447 issued rwts: total=5997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.447 job2: (groupid=0, jobs=1): err= 0: pid=2919150: Wed May 15 10:20:16 2024 00:26:32.447 read: IOPS=620, BW=155MiB/s (163MB/s)(1569MiB/10115msec) 00:26:32.447 slat (usec): min=6, max=227718, avg=1320.18, stdev=6195.94 00:26:32.447 clat (msec): min=10, max=364, avg=101.65, stdev=48.27 00:26:32.447 lat 
(msec): min=10, max=364, avg=102.97, stdev=48.94 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 19], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 54], 00:26:32.447 | 30.00th=[ 71], 40.00th=[ 89], 50.00th=[ 102], 60.00th=[ 114], 00:26:32.447 | 70.00th=[ 126], 80.00th=[ 138], 90.00th=[ 165], 95.00th=[ 186], 00:26:32.447 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 275], 00:26:32.447 | 99.99th=[ 363] 00:26:32.447 bw ( KiB/s): min=92160, max=230400, per=9.35%, avg=159052.80, stdev=41340.97, samples=20 00:26:32.447 iops : min= 360, max= 900, avg=621.30, stdev=161.49, samples=20 00:26:32.447 lat (msec) : 20=1.86%, 50=13.64%, 100=33.19%, 250=50.72%, 500=0.59% 00:26:32.447 cpu : usr=0.20%, sys=2.03%, ctx=1675, majf=0, minf=4097 00:26:32.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:32.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.447 issued rwts: total=6276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.447 job3: (groupid=0, jobs=1): err= 0: pid=2919151: Wed May 15 10:20:16 2024 00:26:32.447 read: IOPS=567, BW=142MiB/s (149MB/s)(1430MiB/10073msec) 00:26:32.447 slat (usec): min=5, max=93338, avg=1563.68, stdev=5321.25 00:26:32.447 clat (msec): min=25, max=233, avg=111.02, stdev=36.29 00:26:32.447 lat (msec): min=26, max=233, avg=112.58, stdev=36.88 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 59], 20.00th=[ 82], 00:26:32.447 | 30.00th=[ 94], 40.00th=[ 104], 50.00th=[ 113], 60.00th=[ 122], 00:26:32.447 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 157], 95.00th=[ 171], 00:26:32.447 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 209], 99.95th=[ 222], 00:26:32.447 | 99.99th=[ 234] 00:26:32.447 bw ( KiB/s): min=101376, max=284160, per=8.51%, avg=144781.60, stdev=44594.48, samples=20 00:26:32.447 iops : min= 396, max= 1110, avg=565.55, stdev=174.20, samples=20 00:26:32.447 lat (msec) : 50=7.12%, 100=28.97%, 250=63.92% 00:26:32.447 cpu : usr=0.20%, sys=1.82%, ctx=1413, majf=0, minf=4097 00:26:32.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:32.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.447 issued rwts: total=5720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.447 job4: (groupid=0, jobs=1): err= 0: pid=2919152: Wed May 15 10:20:16 2024 00:26:32.447 read: IOPS=659, BW=165MiB/s (173MB/s)(1665MiB/10102msec) 00:26:32.447 slat (usec): min=5, max=144076, avg=1272.18, stdev=5834.77 00:26:32.447 clat (msec): min=5, max=402, avg=95.71, stdev=56.80 00:26:32.447 lat (msec): min=5, max=402, avg=96.98, stdev=57.27 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 45], 00:26:32.447 | 30.00th=[ 54], 40.00th=[ 69], 50.00th=[ 89], 60.00th=[ 107], 00:26:32.447 | 70.00th=[ 124], 80.00th=[ 142], 90.00th=[ 159], 95.00th=[ 186], 00:26:32.447 | 99.00th=[ 264], 99.50th=[ 363], 99.90th=[ 393], 99.95th=[ 397], 00:26:32.447 | 99.99th=[ 401] 00:26:32.447 bw ( KiB/s): min=54272, max=344576, per=9.93%, avg=168857.60, stdev=76863.32, samples=20 00:26:32.447 iops : min= 212, max= 1346, avg=659.60, stdev=300.25, samples=20 00:26:32.447 lat (msec) : 10=0.03%, 
20=1.11%, 50=25.59%, 100=29.67%, 250=41.43% 00:26:32.447 lat (msec) : 500=2.16% 00:26:32.447 cpu : usr=0.26%, sys=2.12%, ctx=1677, majf=0, minf=4097 00:26:32.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:32.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.447 issued rwts: total=6659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.447 job5: (groupid=0, jobs=1): err= 0: pid=2919153: Wed May 15 10:20:16 2024 00:26:32.447 read: IOPS=746, BW=187MiB/s (196MB/s)(1886MiB/10099msec) 00:26:32.447 slat (usec): min=6, max=122776, avg=1054.38, stdev=5013.26 00:26:32.447 clat (msec): min=3, max=241, avg=84.51, stdev=40.76 00:26:32.447 lat (msec): min=5, max=244, avg=85.56, stdev=41.25 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 14], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 48], 00:26:32.447 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 82], 60.00th=[ 94], 00:26:32.447 | 70.00th=[ 105], 80.00th=[ 117], 90.00th=[ 138], 95.00th=[ 159], 00:26:32.447 | 99.00th=[ 194], 99.50th=[ 224], 99.90th=[ 224], 99.95th=[ 224], 00:26:32.447 | 99.99th=[ 243] 00:26:32.447 bw ( KiB/s): min=139264, max=312320, per=11.26%, avg=191477.40, stdev=54038.69, samples=20 00:26:32.447 iops : min= 544, max= 1220, avg=747.95, stdev=211.10, samples=20 00:26:32.447 lat (msec) : 4=0.01%, 10=0.38%, 20=2.33%, 50=19.58%, 100=42.50% 00:26:32.447 lat (msec) : 250=35.18% 00:26:32.447 cpu : usr=0.27%, sys=2.36%, ctx=2123, majf=0, minf=4097 00:26:32.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:32.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.447 issued rwts: total=7543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.447 job6: (groupid=0, jobs=1): err= 0: pid=2919156: Wed May 15 10:20:16 2024 00:26:32.447 read: IOPS=693, BW=173MiB/s (182MB/s)(1739MiB/10031msec) 00:26:32.447 slat (usec): min=5, max=86395, avg=1339.30, stdev=4395.37 00:26:32.447 clat (msec): min=10, max=229, avg=90.88, stdev=43.92 00:26:32.447 lat (msec): min=10, max=231, avg=92.22, stdev=44.51 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 38], 20.00th=[ 50], 00:26:32.447 | 30.00th=[ 57], 40.00th=[ 70], 50.00th=[ 86], 60.00th=[ 104], 00:26:32.447 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 150], 95.00th=[ 167], 00:26:32.447 | 99.00th=[ 199], 99.50th=[ 205], 99.90th=[ 218], 99.95th=[ 218], 00:26:32.447 | 99.99th=[ 230] 00:26:32.447 bw ( KiB/s): min=101888, max=380928, per=10.38%, avg=176428.10, stdev=73404.60, samples=20 00:26:32.447 iops : min= 398, max= 1488, avg=689.15, stdev=286.73, samples=20 00:26:32.447 lat (msec) : 20=1.83%, 50=19.74%, 100=37.04%, 250=41.39% 00:26:32.447 cpu : usr=0.20%, sys=2.15%, ctx=1589, majf=0, minf=4097 00:26:32.447 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:32.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.447 issued rwts: total=6954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.447 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.447 job7: (groupid=0, jobs=1): err= 
0: pid=2919157: Wed May 15 10:20:16 2024 00:26:32.447 read: IOPS=622, BW=156MiB/s (163MB/s)(1575MiB/10116msec) 00:26:32.447 slat (usec): min=7, max=102997, avg=1231.19, stdev=4685.36 00:26:32.447 clat (msec): min=3, max=259, avg=101.36, stdev=38.92 00:26:32.447 lat (msec): min=5, max=259, avg=102.59, stdev=39.36 00:26:32.447 clat percentiles (msec): 00:26:32.447 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 53], 20.00th=[ 68], 00:26:32.447 | 30.00th=[ 78], 40.00th=[ 91], 50.00th=[ 101], 60.00th=[ 110], 00:26:32.447 | 70.00th=[ 121], 80.00th=[ 133], 90.00th=[ 153], 95.00th=[ 167], 00:26:32.447 | 99.00th=[ 207], 99.50th=[ 224], 99.90th=[ 243], 99.95th=[ 249], 00:26:32.447 | 99.99th=[ 259] 00:26:32.447 bw ( KiB/s): min=102912, max=231424, per=9.39%, avg=159692.80, stdev=39326.03, samples=20 00:26:32.448 iops : min= 402, max= 904, avg=623.80, stdev=153.62, samples=20 00:26:32.448 lat (msec) : 4=0.02%, 10=0.08%, 20=0.43%, 50=8.36%, 100=40.64% 00:26:32.448 lat (msec) : 250=50.45%, 500=0.02% 00:26:32.448 cpu : usr=0.22%, sys=2.09%, ctx=1680, majf=0, minf=4097 00:26:32.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:32.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.448 issued rwts: total=6301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.448 job8: (groupid=0, jobs=1): err= 0: pid=2919159: Wed May 15 10:20:16 2024 00:26:32.448 read: IOPS=521, BW=130MiB/s (137MB/s)(1313MiB/10070msec) 00:26:32.448 slat (usec): min=5, max=330976, avg=1740.04, stdev=7716.51 00:26:32.448 clat (msec): min=13, max=458, avg=120.85, stdev=57.81 00:26:32.448 lat (msec): min=13, max=459, avg=122.59, stdev=58.33 00:26:32.448 clat percentiles (msec): 00:26:32.448 | 1.00th=[ 26], 5.00th=[ 47], 10.00th=[ 64], 20.00th=[ 84], 00:26:32.448 | 30.00th=[ 94], 40.00th=[ 105], 50.00th=[ 115], 60.00th=[ 126], 00:26:32.448 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 171], 95.00th=[ 209], 00:26:32.448 | 99.00th=[ 405], 99.50th=[ 451], 99.90th=[ 456], 99.95th=[ 460], 00:26:32.448 | 99.99th=[ 460] 00:26:32.448 bw ( KiB/s): min=40448, max=216064, per=7.81%, avg=132864.00, stdev=38271.52, samples=20 00:26:32.448 iops : min= 158, max= 844, avg=519.00, stdev=149.50, samples=20 00:26:32.448 lat (msec) : 20=0.32%, 50=5.92%, 100=30.08%, 250=59.85%, 500=3.83% 00:26:32.448 cpu : usr=0.18%, sys=1.65%, ctx=1213, majf=0, minf=4097 00:26:32.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:32.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.448 issued rwts: total=5253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.448 job9: (groupid=0, jobs=1): err= 0: pid=2919160: Wed May 15 10:20:16 2024 00:26:32.448 read: IOPS=520, BW=130MiB/s (136MB/s)(1313MiB/10100msec) 00:26:32.448 slat (usec): min=8, max=252632, avg=1692.73, stdev=6662.98 00:26:32.448 clat (msec): min=20, max=403, avg=121.23, stdev=45.04 00:26:32.448 lat (msec): min=20, max=403, avg=122.92, stdev=45.25 00:26:32.448 clat percentiles (msec): 00:26:32.448 | 1.00th=[ 37], 5.00th=[ 68], 10.00th=[ 79], 20.00th=[ 90], 00:26:32.448 | 30.00th=[ 99], 40.00th=[ 108], 50.00th=[ 116], 60.00th=[ 123], 00:26:32.448 | 70.00th=[ 132], 80.00th=[ 146], 90.00th=[ 165], 
95.00th=[ 201], 00:26:32.448 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 397], 99.95th=[ 401], 00:26:32.448 | 99.99th=[ 405] 00:26:32.448 bw ( KiB/s): min=62464, max=201216, per=7.81%, avg=132877.10, stdev=30068.07, samples=20 00:26:32.448 iops : min= 244, max= 786, avg=519.05, stdev=117.45, samples=20 00:26:32.448 lat (msec) : 50=1.88%, 100=30.59%, 250=66.30%, 500=1.22% 00:26:32.448 cpu : usr=0.22%, sys=1.75%, ctx=1282, majf=0, minf=4097 00:26:32.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:32.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.448 issued rwts: total=5253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.448 job10: (groupid=0, jobs=1): err= 0: pid=2919161: Wed May 15 10:20:16 2024 00:26:32.448 read: IOPS=446, BW=112MiB/s (117MB/s)(1125MiB/10083msec) 00:26:32.448 slat (usec): min=8, max=200330, avg=1944.01, stdev=8272.32 00:26:32.448 clat (msec): min=24, max=365, avg=141.33, stdev=54.94 00:26:32.448 lat (msec): min=28, max=365, avg=143.27, stdev=55.20 00:26:32.448 clat percentiles (msec): 00:26:32.448 | 1.00th=[ 58], 5.00th=[ 72], 10.00th=[ 85], 20.00th=[ 100], 00:26:32.448 | 30.00th=[ 112], 40.00th=[ 125], 50.00th=[ 132], 60.00th=[ 140], 00:26:32.448 | 70.00th=[ 153], 80.00th=[ 169], 90.00th=[ 211], 95.00th=[ 275], 00:26:32.448 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 368], 00:26:32.448 | 99.99th=[ 368] 00:26:32.448 bw ( KiB/s): min=50176, max=153600, per=6.68%, avg=113536.00, stdev=26776.89, samples=20 00:26:32.448 iops : min= 196, max= 600, avg=443.50, stdev=104.60, samples=20 00:26:32.448 lat (msec) : 50=0.51%, 100=19.68%, 250=73.01%, 500=6.80% 00:26:32.448 cpu : usr=0.15%, sys=1.42%, ctx=1017, majf=0, minf=3534 00:26:32.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:32.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:32.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:32.448 issued rwts: total=4498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:32.448 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:32.448 00:26:32.448 Run status group 0 (all jobs): 00:26:32.448 READ: bw=1660MiB/s (1741MB/s), 112MiB/s-187MiB/s (117MB/s-196MB/s), io=16.4GiB (17.6GB), run=10031-10116msec 00:26:32.448 00:26:32.448 Disk stats (read/write): 00:26:32.448 nvme0n1: ios=13071/0, merge=0/0, ticks=1217512/0, in_queue=1217512, util=96.46% 00:26:32.448 nvme10n1: ios=11738/0, merge=0/0, ticks=1215652/0, in_queue=1215652, util=96.68% 00:26:32.448 nvme1n1: ios=12493/0, merge=0/0, ticks=1241985/0, in_queue=1241985, util=97.17% 00:26:32.448 nvme2n1: ios=11169/0, merge=0/0, ticks=1214018/0, in_queue=1214018, util=97.29% 00:26:32.448 nvme3n1: ios=13124/0, merge=0/0, ticks=1219794/0, in_queue=1219794, util=97.45% 00:26:32.448 nvme4n1: ios=14835/0, merge=0/0, ticks=1211279/0, in_queue=1211279, util=97.87% 00:26:32.448 nvme5n1: ios=13393/0, merge=0/0, ticks=1214872/0, in_queue=1214872, util=98.05% 00:26:32.448 nvme6n1: ios=12554/0, merge=0/0, ticks=1246691/0, in_queue=1246691, util=98.31% 00:26:32.448 nvme7n1: ios=10238/0, merge=0/0, ticks=1215481/0, in_queue=1215481, util=98.74% 00:26:32.448 nvme8n1: ios=10250/0, merge=0/0, ticks=1208600/0, in_queue=1208600, util=98.98% 00:26:32.448 nvme9n1: ios=8740/0, merge=0/0, ticks=1215446/0, in_queue=1215446, 
util=99.24% 00:26:32.448 10:20:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:32.448 [global] 00:26:32.448 thread=1 00:26:32.448 invalidate=1 00:26:32.448 rw=randwrite 00:26:32.448 time_based=1 00:26:32.448 runtime=10 00:26:32.448 ioengine=libaio 00:26:32.448 direct=1 00:26:32.448 bs=262144 00:26:32.448 iodepth=64 00:26:32.448 norandommap=1 00:26:32.448 numjobs=1 00:26:32.448 00:26:32.448 [job0] 00:26:32.448 filename=/dev/nvme0n1 00:26:32.448 [job1] 00:26:32.448 filename=/dev/nvme10n1 00:26:32.448 [job2] 00:26:32.448 filename=/dev/nvme1n1 00:26:32.448 [job3] 00:26:32.448 filename=/dev/nvme2n1 00:26:32.448 [job4] 00:26:32.448 filename=/dev/nvme3n1 00:26:32.448 [job5] 00:26:32.448 filename=/dev/nvme4n1 00:26:32.448 [job6] 00:26:32.448 filename=/dev/nvme5n1 00:26:32.448 [job7] 00:26:32.448 filename=/dev/nvme6n1 00:26:32.448 [job8] 00:26:32.448 filename=/dev/nvme7n1 00:26:32.448 [job9] 00:26:32.448 filename=/dev/nvme8n1 00:26:32.448 [job10] 00:26:32.448 filename=/dev/nvme9n1 00:26:32.448 Could not set queue depth (nvme0n1) 00:26:32.448 Could not set queue depth (nvme10n1) 00:26:32.448 Could not set queue depth (nvme1n1) 00:26:32.448 Could not set queue depth (nvme2n1) 00:26:32.448 Could not set queue depth (nvme3n1) 00:26:32.448 Could not set queue depth (nvme4n1) 00:26:32.448 Could not set queue depth (nvme5n1) 00:26:32.448 Could not set queue depth (nvme6n1) 00:26:32.448 Could not set queue depth (nvme7n1) 00:26:32.448 Could not set queue depth (nvme8n1) 00:26:32.448 Could not set queue depth (nvme9n1) 00:26:32.448 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:32.448 fio-3.35 00:26:32.448 Starting 11 threads 00:26:42.451 00:26:42.451 job0: (groupid=0, jobs=1): err= 0: pid=2920888: Wed May 15 10:20:27 2024 00:26:42.451 write: IOPS=249, BW=62.4MiB/s (65.4MB/s)(634MiB/10159msec); 0 zone resets 00:26:42.451 slat (usec): min=26, max=131496, avg=3434.67, stdev=8659.85 00:26:42.451 clat (msec): min=7, max=415, avg=252.97, stdev=100.66 00:26:42.451 lat (msec): min=7, max=415, avg=256.41, stdev=102.22 00:26:42.451 clat percentiles 
(msec): 00:26:42.451 | 1.00th=[ 29], 5.00th=[ 65], 10.00th=[ 108], 20.00th=[ 155], 00:26:42.451 | 30.00th=[ 190], 40.00th=[ 226], 50.00th=[ 266], 60.00th=[ 305], 00:26:42.451 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 372], 95.00th=[ 388], 00:26:42.451 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 418], 99.95th=[ 418], 00:26:42.451 | 99.99th=[ 418] 00:26:42.451 bw ( KiB/s): min=40960, max=112640, per=5.24%, avg=63257.60, stdev=23247.44, samples=20 00:26:42.451 iops : min= 160, max= 440, avg=247.10, stdev=90.81, samples=20 00:26:42.451 lat (msec) : 10=0.04%, 20=0.08%, 50=3.24%, 100=4.54%, 250=35.71% 00:26:42.451 lat (msec) : 500=56.39% 00:26:42.451 cpu : usr=0.62%, sys=0.82%, ctx=1066, majf=0, minf=1 00:26:42.451 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:26:42.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.451 issued rwts: total=0,2534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.451 job1: (groupid=0, jobs=1): err= 0: pid=2920911: Wed May 15 10:20:27 2024 00:26:42.451 write: IOPS=472, BW=118MiB/s (124MB/s)(1190MiB/10080msec); 0 zone resets 00:26:42.451 slat (usec): min=21, max=110888, avg=2020.82, stdev=5372.10 00:26:42.451 clat (msec): min=11, max=266, avg=133.40, stdev=47.32 00:26:42.451 lat (msec): min=17, max=266, avg=135.42, stdev=47.80 00:26:42.451 clat percentiles (msec): 00:26:42.451 | 1.00th=[ 73], 5.00th=[ 80], 10.00th=[ 85], 20.00th=[ 92], 00:26:42.451 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 118], 60.00th=[ 136], 00:26:42.451 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 203], 95.00th=[ 224], 00:26:42.451 | 99.00th=[ 255], 99.50th=[ 259], 99.90th=[ 266], 99.95th=[ 266], 00:26:42.451 | 99.99th=[ 266] 00:26:42.451 bw ( KiB/s): min=71680, max=185344, per=9.97%, avg=120279.40, stdev=37389.26, samples=20 00:26:42.451 iops : min= 280, max= 724, avg=469.80, stdev=146.07, samples=20 00:26:42.451 lat (msec) : 20=0.04%, 50=0.23%, 100=34.32%, 250=64.08%, 500=1.32% 00:26:42.451 cpu : usr=1.03%, sys=1.26%, ctx=1355, majf=0, minf=1 00:26:42.451 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:42.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.451 issued rwts: total=0,4761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.451 job2: (groupid=0, jobs=1): err= 0: pid=2920916: Wed May 15 10:20:27 2024 00:26:42.451 write: IOPS=549, BW=137MiB/s (144MB/s)(1388MiB/10099msec); 0 zone resets 00:26:42.451 slat (usec): min=24, max=129821, avg=1581.50, stdev=3751.11 00:26:42.452 clat (msec): min=15, max=378, avg=114.80, stdev=41.56 00:26:42.452 lat (msec): min=16, max=381, avg=116.38, stdev=41.90 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 53], 5.00th=[ 73], 10.00th=[ 79], 20.00th=[ 89], 00:26:42.452 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 106], 60.00th=[ 112], 00:26:42.452 | 70.00th=[ 121], 80.00th=[ 138], 90.00th=[ 159], 95.00th=[ 180], 00:26:42.452 | 99.00th=[ 309], 99.50th=[ 342], 99.90th=[ 372], 99.95th=[ 376], 00:26:42.452 | 99.99th=[ 380] 00:26:42.452 bw ( KiB/s): min=75776, max=195584, per=11.64%, avg=140441.60, stdev=29142.65, samples=20 00:26:42.452 iops : min= 296, max= 764, avg=548.60, stdev=113.84, samples=20 00:26:42.452 lat (msec) : 
20=0.02%, 50=0.79%, 100=38.90%, 250=58.52%, 500=1.77% 00:26:42.452 cpu : usr=1.13%, sys=1.66%, ctx=1945, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.452 issued rwts: total=0,5550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.452 job3: (groupid=0, jobs=1): err= 0: pid=2920917: Wed May 15 10:20:27 2024 00:26:42.452 write: IOPS=458, BW=115MiB/s (120MB/s)(1157MiB/10092msec); 0 zone resets 00:26:42.452 slat (usec): min=27, max=143825, avg=2157.56, stdev=5382.71 00:26:42.452 clat (msec): min=9, max=285, avg=137.39, stdev=47.18 00:26:42.452 lat (msec): min=9, max=285, avg=139.54, stdev=47.59 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 40], 5.00th=[ 81], 10.00th=[ 87], 20.00th=[ 96], 00:26:42.452 | 30.00th=[ 105], 40.00th=[ 117], 50.00th=[ 128], 60.00th=[ 144], 00:26:42.452 | 70.00th=[ 159], 80.00th=[ 182], 90.00th=[ 207], 95.00th=[ 228], 00:26:42.452 | 99.00th=[ 255], 99.50th=[ 264], 99.90th=[ 275], 99.95th=[ 288], 00:26:42.452 | 99.99th=[ 288] 00:26:42.452 bw ( KiB/s): min=71680, max=182272, per=9.68%, avg=116838.40, stdev=33619.09, samples=20 00:26:42.452 iops : min= 280, max= 712, avg=456.40, stdev=131.32, samples=20 00:26:42.452 lat (msec) : 10=0.04%, 20=0.35%, 50=0.95%, 100=24.36%, 250=73.05% 00:26:42.452 lat (msec) : 500=1.25% 00:26:42.452 cpu : usr=1.02%, sys=1.48%, ctx=1172, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.452 issued rwts: total=0,4627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.452 job4: (groupid=0, jobs=1): err= 0: pid=2920918: Wed May 15 10:20:27 2024 00:26:42.452 write: IOPS=327, BW=81.8MiB/s (85.8MB/s)(830MiB/10147msec); 0 zone resets 00:26:42.452 slat (usec): min=27, max=302039, avg=2603.78, stdev=8167.12 00:26:42.452 clat (msec): min=63, max=569, avg=192.81, stdev=68.60 00:26:42.452 lat (msec): min=65, max=632, avg=195.41, stdev=69.21 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 106], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 155], 00:26:42.452 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 180], 00:26:42.452 | 70.00th=[ 190], 80.00th=[ 209], 90.00th=[ 275], 95.00th=[ 372], 00:26:42.452 | 99.00th=[ 447], 99.50th=[ 472], 99.90th=[ 567], 99.95th=[ 567], 00:26:42.452 | 99.99th=[ 567] 00:26:42.452 bw ( KiB/s): min=24064, max=108544, per=6.91%, avg=83379.20, stdev=21494.55, samples=20 00:26:42.452 iops : min= 94, max= 424, avg=325.70, stdev=83.96, samples=20 00:26:42.452 lat (msec) : 100=0.75%, 250=87.26%, 500=11.50%, 750=0.48% 00:26:42.452 cpu : usr=0.84%, sys=0.80%, ctx=1168, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.452 issued rwts: total=0,3321,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.452 job5: (groupid=0, 
jobs=1): err= 0: pid=2920921: Wed May 15 10:20:27 2024 00:26:42.452 write: IOPS=379, BW=94.8MiB/s (99.4MB/s)(963MiB/10154msec); 0 zone resets 00:26:42.452 slat (usec): min=24, max=96460, avg=2182.82, stdev=5818.87 00:26:42.452 clat (msec): min=23, max=413, avg=166.50, stdev=67.36 00:26:42.452 lat (msec): min=23, max=419, avg=168.68, stdev=68.02 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 63], 5.00th=[ 81], 10.00th=[ 87], 20.00th=[ 103], 00:26:42.452 | 30.00th=[ 124], 40.00th=[ 140], 50.00th=[ 155], 60.00th=[ 174], 00:26:42.452 | 70.00th=[ 203], 80.00th=[ 224], 90.00th=[ 266], 95.00th=[ 288], 00:26:42.452 | 99.00th=[ 355], 99.50th=[ 380], 99.90th=[ 409], 99.95th=[ 414], 00:26:42.452 | 99.99th=[ 414] 00:26:42.452 bw ( KiB/s): min=49152, max=183808, per=8.04%, avg=96972.80, stdev=33935.17, samples=20 00:26:42.452 iops : min= 192, max= 718, avg=378.80, stdev=132.56, samples=20 00:26:42.452 lat (msec) : 50=0.31%, 100=18.38%, 250=69.10%, 500=12.20% 00:26:42.452 cpu : usr=0.84%, sys=1.24%, ctx=1520, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.452 issued rwts: total=0,3851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.452 job6: (groupid=0, jobs=1): err= 0: pid=2920922: Wed May 15 10:20:27 2024 00:26:42.452 write: IOPS=488, BW=122MiB/s (128MB/s)(1243MiB/10183msec); 0 zone resets 00:26:42.452 slat (usec): min=26, max=110179, avg=2009.87, stdev=4375.82 00:26:42.452 clat (msec): min=20, max=363, avg=129.03, stdev=58.51 00:26:42.452 lat (msec): min=20, max=363, avg=131.04, stdev=59.22 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 69], 20.00th=[ 80], 00:26:42.452 | 30.00th=[ 87], 40.00th=[ 99], 50.00th=[ 124], 60.00th=[ 138], 00:26:42.452 | 70.00th=[ 148], 80.00th=[ 161], 90.00th=[ 211], 95.00th=[ 262], 00:26:42.452 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 347], 99.95th=[ 347], 00:26:42.452 | 99.99th=[ 363] 00:26:42.452 bw ( KiB/s): min=61440, max=248832, per=10.41%, avg=125619.20, stdev=51796.45, samples=20 00:26:42.452 iops : min= 240, max= 972, avg=490.70, stdev=202.33, samples=20 00:26:42.452 lat (msec) : 50=0.24%, 100=40.54%, 250=53.09%, 500=6.14% 00:26:42.452 cpu : usr=0.98%, sys=1.31%, ctx=1278, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.452 issued rwts: total=0,4971,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.452 job7: (groupid=0, jobs=1): err= 0: pid=2920923: Wed May 15 10:20:27 2024 00:26:42.452 write: IOPS=290, BW=72.6MiB/s (76.1MB/s)(740MiB/10183msec); 0 zone resets 00:26:42.452 slat (usec): min=24, max=59747, avg=3382.66, stdev=6282.73 00:26:42.452 clat (msec): min=24, max=399, avg=216.86, stdev=37.90 00:26:42.452 lat (msec): min=24, max=399, avg=220.24, stdev=37.95 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 104], 5.00th=[ 140], 10.00th=[ 165], 20.00th=[ 201], 00:26:42.452 | 30.00th=[ 213], 40.00th=[ 220], 50.00th=[ 224], 60.00th=[ 228], 00:26:42.452 | 70.00th=[ 232], 80.00th=[ 239], 90.00th=[ 249], 
95.00th=[ 262], 00:26:42.452 | 99.00th=[ 305], 99.50th=[ 342], 99.90th=[ 388], 99.95th=[ 401], 00:26:42.452 | 99.99th=[ 401] 00:26:42.452 bw ( KiB/s): min=59904, max=100864, per=6.14%, avg=74095.00, stdev=8996.28, samples=20 00:26:42.452 iops : min= 234, max= 394, avg=289.40, stdev=35.09, samples=20 00:26:42.452 lat (msec) : 50=0.27%, 100=0.68%, 250=89.52%, 500=9.53% 00:26:42.452 cpu : usr=0.72%, sys=0.66%, ctx=784, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.452 issued rwts: total=0,2958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.452 job8: (groupid=0, jobs=1): err= 0: pid=2920925: Wed May 15 10:20:27 2024 00:26:42.452 write: IOPS=639, BW=160MiB/s (168MB/s)(1616MiB/10116msec); 0 zone resets 00:26:42.452 slat (usec): min=14, max=103294, avg=1508.81, stdev=3600.36 00:26:42.452 clat (msec): min=6, max=307, avg=98.57, stdev=45.64 00:26:42.452 lat (msec): min=6, max=307, avg=100.08, stdev=46.21 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 22], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 65], 00:26:42.452 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 88], 60.00th=[ 96], 00:26:42.452 | 70.00th=[ 103], 80.00th=[ 122], 90.00th=[ 161], 95.00th=[ 201], 00:26:42.452 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 300], 00:26:42.452 | 99.99th=[ 309] 00:26:42.452 bw ( KiB/s): min=74240, max=256000, per=13.58%, avg=163910.90, stdev=54580.60, samples=20 00:26:42.452 iops : min= 290, max= 1000, avg=640.25, stdev=213.19, samples=20 00:26:42.452 lat (msec) : 10=0.23%, 20=0.65%, 50=1.72%, 100=63.71%, 250=32.45% 00:26:42.452 lat (msec) : 500=1.24% 00:26:42.452 cpu : usr=1.15%, sys=1.43%, ctx=1856, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.452 issued rwts: total=0,6465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.452 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.452 job9: (groupid=0, jobs=1): err= 0: pid=2920926: Wed May 15 10:20:27 2024 00:26:42.452 write: IOPS=269, BW=67.4MiB/s (70.7MB/s)(684MiB/10142msec); 0 zone resets 00:26:42.452 slat (usec): min=26, max=55674, avg=3588.90, stdev=7540.56 00:26:42.452 clat (msec): min=24, max=417, avg=233.66, stdev=107.85 00:26:42.452 lat (msec): min=24, max=417, avg=237.24, stdev=109.35 00:26:42.452 clat percentiles (msec): 00:26:42.452 | 1.00th=[ 73], 5.00th=[ 81], 10.00th=[ 99], 20.00th=[ 124], 00:26:42.452 | 30.00th=[ 142], 40.00th=[ 157], 50.00th=[ 241], 60.00th=[ 288], 00:26:42.452 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[ 376], 95.00th=[ 388], 00:26:42.452 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 418], 99.95th=[ 418], 00:26:42.452 | 99.99th=[ 418] 00:26:42.452 bw ( KiB/s): min=40960, max=157184, per=5.67%, avg=68377.60, stdev=34074.77, samples=20 00:26:42.452 iops : min= 160, max= 614, avg=267.10, stdev=133.10, samples=20 00:26:42.452 lat (msec) : 50=0.29%, 100=10.05%, 250=40.95%, 500=48.70% 00:26:42.452 cpu : usr=0.60%, sys=0.80%, ctx=795, majf=0, minf=1 00:26:42.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:42.452 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.453 issued rwts: total=0,2735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.453 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.453 job10: (groupid=0, jobs=1): err= 0: pid=2920927: Wed May 15 10:20:27 2024 00:26:42.453 write: IOPS=618, BW=155MiB/s (162MB/s)(1557MiB/10067msec); 0 zone resets 00:26:42.453 slat (usec): min=24, max=69539, avg=1514.89, stdev=3689.50 00:26:42.453 clat (msec): min=23, max=292, avg=101.91, stdev=46.45 00:26:42.453 lat (msec): min=23, max=292, avg=103.43, stdev=47.03 00:26:42.453 clat percentiles (msec): 00:26:42.453 | 1.00th=[ 40], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 72], 00:26:42.453 | 30.00th=[ 75], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 86], 00:26:42.453 | 70.00th=[ 99], 80.00th=[ 134], 90.00th=[ 184], 95.00th=[ 203], 00:26:42.453 | 99.00th=[ 251], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 292], 00:26:42.453 | 99.99th=[ 292] 00:26:42.453 bw ( KiB/s): min=67584, max=235520, per=13.08%, avg=157784.20, stdev=57898.18, samples=20 00:26:42.453 iops : min= 264, max= 920, avg=616.30, stdev=226.20, samples=20 00:26:42.453 lat (msec) : 50=1.56%, 100=69.11%, 250=28.28%, 500=1.04% 00:26:42.453 cpu : usr=1.25%, sys=1.91%, ctx=1890, majf=0, minf=1 00:26:42.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:42.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:42.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:42.453 issued rwts: total=0,6226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:42.453 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:42.453 00:26:42.453 Run status group 0 (all jobs): 00:26:42.453 WRITE: bw=1178MiB/s (1236MB/s), 62.4MiB/s-160MiB/s (65.4MB/s-168MB/s), io=11.7GiB (12.6GB), run=10067-10183msec 00:26:42.453 00:26:42.453 Disk stats (read/write): 00:26:42.453 nvme0n1: ios=51/5018, merge=0/0, ticks=1232/1226241, in_queue=1227473, util=99.77% 00:26:42.453 nvme10n1: ios=55/9213, merge=0/0, ticks=3439/1188085, in_queue=1191524, util=99.95% 00:26:42.453 nvme1n1: ios=44/11095, merge=0/0, ticks=1349/1234769, in_queue=1236118, util=100.00% 00:26:42.453 nvme2n1: ios=47/8912, merge=0/0, ticks=2503/1183448, in_queue=1185951, util=99.98% 00:26:42.453 nvme3n1: ios=39/6594, merge=0/0, ticks=2993/1230141, in_queue=1233134, util=100.00% 00:26:42.453 nvme4n1: ios=0/7647, merge=0/0, ticks=0/1232414, in_queue=1232414, util=97.78% 00:26:42.453 nvme5n1: ios=15/9852, merge=0/0, ticks=164/1219999, in_queue=1220163, util=98.31% 00:26:42.453 nvme6n1: ios=0/5844, merge=0/0, ticks=0/1222513, in_queue=1222513, util=98.13% 00:26:42.453 nvme7n1: ios=44/12885, merge=0/0, ticks=1486/1220786, in_queue=1222272, util=100.00% 00:26:42.453 nvme8n1: ios=0/5426, merge=0/0, ticks=0/1227777, in_queue=1227777, util=98.89% 00:26:42.453 nvme9n1: ios=44/12070, merge=0/0, ticks=1301/1200221, in_queue=1201522, util=100.00% 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:42.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
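The teardown that begins here repeats one pattern for each of the 11 subsystems: disconnect the initiator, poll lsblk until the SPDKn serial disappears, then delete the subsystem over RPC. A minimal standalone sketch of that loop, assuming the scripts/rpc.py entry point and the SPDKn serial naming seen elsewhere in this run (the loop below is illustrative, not the exact code from multiconnection.sh):

    for i in $(seq 1 11); do
        # Drop the initiator-side connection to this subsystem
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Poll until no block device with serial SPDK${i} remains visible
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        # Remove the subsystem on the target side over the RPC socket
        ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done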
00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK1 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:42.453 10:20:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:42.453 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:42.453 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.453 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:42.453 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:42.453 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:42.453 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.453 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:43.024 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK2 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.024 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:43.285 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK3 00:26:43.285 10:20:28 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.285 10:20:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:43.285 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:43.285 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:43.285 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:43.285 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:43.285 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK4 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.546 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:43.807 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK5 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:43.807 10:20:29 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.807 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:44.068 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK6 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.068 10:20:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:44.330 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK7 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.330 10:20:30 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:44.591 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK8 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.591 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:44.851 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:44.851 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:44.851 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:44.851 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:44.851 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK9 00:26:44.851 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.852 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:45.113 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # 
lsblk -o NAME,SERIAL 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK10 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:45.113 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # local i=0 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # grep -q -w SPDK11 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1228 -- # return 0 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.113 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.113 rmmod nvme_tcp 00:26:45.374 rmmod nvme_fabrics 00:26:45.374 rmmod nvme_keyring 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
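The module cleanup traced here is best-effort: error exit is disabled, nvme-tcp is removed with retries, then nvme-fabrics, before strict mode is restored. A rough sketch of that sequence, with the retry and sleep details assumed rather than taken from the trace:

    set +e
    for i in {1..20}; do
        # nvme-tcp can stay busy briefly after the disconnects, so retry the unload
        modprobe -v -r nvme-tcp && break   # the '&& break' and sleep are assumptions
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e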
00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2910388 ']' 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2910388 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@947 -- # '[' -z 2910388 ']' 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # kill -0 2910388 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # uname 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:45.374 10:20:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2910388 00:26:45.374 10:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:45.374 10:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:45.374 10:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2910388' 00:26:45.374 killing process with pid 2910388 00:26:45.374 10:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # kill 2910388 00:26:45.374 [2024-05-15 10:20:31.015925] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:45.374 10:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@971 -- # wait 2910388 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.636 10:20:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.190 10:20:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:48.190 00:26:48.190 real 1m18.212s 00:26:48.190 user 4m55.629s 00:26:48.190 sys 0m19.806s 00:26:48.191 10:20:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:48.191 10:20:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:48.191 ************************************ 00:26:48.191 END TEST nvmf_multiconnection 00:26:48.191 ************************************ 00:26:48.191 10:20:33 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:48.191 10:20:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:48.191 10:20:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:48.191 10:20:33 nvmf_tcp 
-- common/autotest_common.sh@10 -- # set +x 00:26:48.191 ************************************ 00:26:48.191 START TEST nvmf_initiator_timeout 00:26:48.191 ************************************ 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:48.191 * Looking for test storage... 00:26:48.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:26:48.191 10:20:33 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 
-- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:54.788 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:54.788 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.788 10:20:40 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:54.788 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:54.788 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:54.788 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:26:55.050 00:26:55.050 --- 10.0.0.2 ping statistics --- 00:26:55.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.050 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.578 ms 00:26:55.050 00:26:55.050 --- 10.0.0.1 ping statistics --- 00:26:55.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.050 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2927200 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2927200 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@828 -- # '[' -z 2927200 ']' 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:55.050 10:20:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.050 [2024-05-15 10:20:40.842532] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:26:55.050 [2024-05-15 10:20:40.842600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.311 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.311 [2024-05-15 10:20:40.919145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.311 [2024-05-15 10:20:40.959324] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.311 [2024-05-15 10:20:40.959375] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.311 [2024-05-15 10:20:40.959383] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.311 [2024-05-15 10:20:40.959390] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.311 [2024-05-15 10:20:40.959396] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
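nvmf_tcp_init and nvmfappstart in the trace above move one E810 port (cvl_0_0) into a private network namespace, address both ends on 10.0.0.0/24, confirm reachability with ping in both directions, and launch nvmf_tgt inside that namespace with core mask 0xF. A condensed sketch of the same flow; the socket-file wait is a simplification of the fuller waitforlisten helper, and the nvmf_tgt path should be adjusted to the local build:

#!/usr/bin/env bash
# Sketch of the TCP test-bed setup traced above (illustrative, not the SPDK common.sh verbatim).
TARGET_IF=cvl_0_0        # port handed to the target namespace
INITIATOR_IF=cvl_0_1     # port left in the root namespace for the initiator
NETNS=cvl_0_0_ns_spdk
NVMF_TGT=${NVMF_TGT:-./build/bin/nvmf_tgt}   # adjust to your SPDK build directory

ip netns add "$NETNS"
ip link set "$TARGET_IF" netns "$NETNS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NETNS" ip link set "$TARGET_IF" up
ip netns exec "$NETNS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Both directions must answer before the target is started.
ping -c 1 10.0.0.2
ip netns exec "$NETNS" ping -c 1 10.0.0.1

# Start the target inside the namespace and wait for its default RPC socket.
ip netns exec "$NETNS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
echo "nvmf_tgt up with pid $nvmfpid"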
00:26:55.311 [2024-05-15 10:20:40.959527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.311 [2024-05-15 10:20:40.959648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.311 [2024-05-15 10:20:40.959788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.311 [2024-05-15 10:20:40.959790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@861 -- # return 0 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.917 Malloc0 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.917 Delay0 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.917 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:55.917 [2024-05-15 10:20:41.707245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.179 [2024-05-15 10:20:41.747319] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:56.179 [2024-05-15 10:20:41.747572] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.179 10:20:41 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:57.567 10:20:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:57.567 10:20:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local i=0 00:26:57.567 10:20:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.567 10:20:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:26:57.567 10:20:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # sleep 2 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # return 0 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2928086 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:00.113 10:20:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:00.113 [global] 00:27:00.113 thread=1 00:27:00.113 invalidate=1 00:27:00.113 rw=write 00:27:00.113 time_based=1 00:27:00.113 runtime=60 00:27:00.113 ioengine=libaio 00:27:00.113 direct=1 00:27:00.113 bs=4096 00:27:00.113 iodepth=1 00:27:00.113 norandommap=0 00:27:00.113 numjobs=1 00:27:00.113 00:27:00.113 verify_dump=1 00:27:00.113 verify_backlog=512 00:27:00.113 verify_state_save=0 00:27:00.113 do_verify=1 00:27:00.113 verify=crc32c-intel 00:27:00.113 [job0] 
00:27:00.113 filename=/dev/nvme0n1 00:27:00.113 Could not set queue depth (nvme0n1) 00:27:00.113 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:00.113 fio-3.35 00:27:00.113 Starting 1 thread 00:27:02.663 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:02.663 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.663 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.663 true 00:27:02.663 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.663 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:02.663 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.663 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.663 true 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.664 true 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:02.664 true 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.664 10:20:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.971 true 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.971 true 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.971 true 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.971 true 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:05.971 10:20:51 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2928086 00:28:02.255 00:28:02.255 job0: (groupid=0, jobs=1): err= 0: pid=2928400: Wed May 15 10:21:45 2024 00:28:02.255 read: IOPS=102, BW=410KiB/s (419kB/s)(24.0MiB/60002msec) 00:28:02.255 slat (usec): min=7, max=10187, avg=30.15, stdev=163.26 00:28:02.255 clat (usec): min=1208, max=42016k, avg=8538.04, stdev=536011.68 00:28:02.255 lat (usec): min=1235, max=42016k, avg=8568.20, stdev=536011.44 00:28:02.255 clat percentiles (usec): 00:28:02.255 | 1.00th=[ 1336], 5.00th=[ 1450], 10.00th=[ 1500], 00:28:02.255 | 20.00th=[ 1614], 30.00th=[ 1696], 40.00th=[ 1729], 00:28:02.255 | 50.00th=[ 1745], 60.00th=[ 1762], 70.00th=[ 1778], 00:28:02.255 | 80.00th=[ 1795], 90.00th=[ 1811], 95.00th=[ 1827], 00:28:02.255 | 99.00th=[ 1860], 99.50th=[ 1876], 99.90th=[ 1926], 00:28:02.255 | 99.95th=[ 1942], 99.99th=[17112761] 00:28:02.255 write: IOPS=108, BW=435KiB/s (445kB/s)(25.5MiB/60002msec); 0 zone resets 00:28:02.255 slat (usec): min=7, max=32379, avg=40.22, stdev=400.48 00:28:02.255 clat (usec): min=727, max=1618, avg=1070.46, stdev=83.45 00:28:02.255 lat (usec): min=761, max=33413, avg=1110.68, stdev=408.69 00:28:02.255 clat percentiles (usec): 00:28:02.255 | 1.00th=[ 873], 5.00th=[ 930], 10.00th=[ 979], 20.00th=[ 1004], 00:28:02.255 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1106], 00:28:02.255 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:28:02.255 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[ 1303], 99.95th=[ 1319], 00:28:02.255 | 99.99th=[ 1614] 00:28:02.255 bw ( KiB/s): min= 776, max= 3320, per=100.00%, avg=2048.00, stdev=769.13, samples=24 00:28:02.255 iops : min= 194, max= 830, avg=512.00, stdev=192.28, samples=24 00:28:02.255 lat (usec) : 750=0.05%, 1000=9.22% 00:28:02.255 lat (msec) : 2=90.72%, >=2000=0.01% 00:28:02.255 cpu : usr=0.54%, sys=0.82%, ctx=12678, majf=0, minf=1 00:28:02.255 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.255 issued rwts: total=6144,6524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.255 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:02.255 00:28:02.255 Run status group 0 (all jobs): 00:28:02.255 READ: bw=410KiB/s (419kB/s), 410KiB/s-410KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60002-60002msec 00:28:02.255 WRITE: bw=435KiB/s (445kB/s), 435KiB/s-435KiB/s (445kB/s-445kB/s), io=25.5MiB (26.7MB), run=60002-60002msec 00:28:02.255 00:28:02.255 Disk stats (read/write): 00:28:02.255 nvme0n1: ios=6197/6430, merge=0/0, ticks=11365/6219, in_queue=17584, util=99.77% 00:28:02.255 
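The RPC and fio activity above amounts to: create a 64 MiB malloc bdev, wrap it in a delay bdev with small baseline latencies, export it as cnode1 over TCP, connect from the initiator, then stretch the delay latencies far past the initiator's I/O timeout before relaxing them so the 60-second verify job can finish. A sketch of that sequence with the values copied from the trace (the rpc.py path is an assumption, and the --hostnqn/--hostid options used by the traced nvme connect are omitted here):

#!/usr/bin/env bash
# Sketch of the initiator_timeout RPC sequence traced above.
set -e
RPC=${RPC:-scripts/rpc.py}          # assumes it talks to the nvmf_tgt started earlier
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512 B blocks
"$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then start the long-running fio verify job against /dev/nvme0n1.
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420

# Stretch the delay latencies far beyond the initiator's I/O timeout, then relax them so
# the job can finish; the point of the test is that the connection survives the stall.
"$RPC" bdev_delay_update_latency Delay0 avg_read  31000000
"$RPC" bdev_delay_update_latency Delay0 avg_write 31000000
"$RPC" bdev_delay_update_latency Delay0 p99_read  31000000
"$RPC" bdev_delay_update_latency Delay0 p99_write 310000000   # value as traced
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
    "$RPC" bdev_delay_update_latency Delay0 "$lat" 30
done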
10:21:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:02.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # local i=0 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1228 -- # return 0 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:02.255 nvmf hotplug test: fio successful as expected 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:02.255 rmmod nvme_tcp 00:28:02.255 rmmod nvme_fabrics 00:28:02.255 rmmod nvme_keyring 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2927200 ']' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2927200 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@947 -- # '[' -z 2927200 ']' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # kill -0 2927200 00:28:02.255 10:21:46 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # uname 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2927200 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2927200' 00:28:02.255 killing process with pid 2927200 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # kill 2927200 00:28:02.255 [2024-05-15 10:21:46.191373] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@971 -- # wait 2927200 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.255 10:21:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.828 10:21:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:02.828 00:28:02.828 real 1m14.958s 00:28:02.828 user 4m34.098s 00:28:02.828 sys 0m7.551s 00:28:02.828 10:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:02.828 10:21:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:02.829 ************************************ 00:28:02.829 END TEST nvmf_initiator_timeout 00:28:02.829 ************************************ 00:28:02.829 10:21:48 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:02.829 10:21:48 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:02.829 10:21:48 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:02.829 10:21:48 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.829 10:21:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:11.053 10:21:55 
nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:11.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:11.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:11.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:11.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:11.053 10:21:55 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:11.053 10:21:55 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:11.053 10:21:55 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:11.053 10:21:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.053 ************************************ 00:28:11.053 START TEST nvmf_perf_adq 00:28:11.053 ************************************ 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:11.053 * Looking for test storage... 
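The START TEST / END TEST banners and the real/user/sys lines that bracket each test in this log come from the run_test wrapper invoked above. A simplified stand-in for that wrapper (the actual helper lives in autotest_common.sh and is not reproduced here) behaves roughly like:

# Simplified stand-in for run_test; prints banners around the test and times it.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# e.g. run_test nvmf_perf_adq ./test/nvmf/target/perf_adq.sh --transport=tcp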
00:28:11.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.053 10:21:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:17.647 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:17.647 Found 0000:4b:00.1 (0x8086 - 0x159b) 
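Note on the device discovery traced above: gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device ID. Intel 0x1592/0x159b go into the e810 array (ice driver), 0x37d2 into x722, and the 0x15b3 entries into mlx; both ports found here (0000:4b:00.0/.1, 0x8086:0x159b) are E810 parts, which is why pci_devs collapses to the e810 list and ADQ can be exercised at all. A minimal stand-alone sketch of the same lookup, assuming only the standard /sys/bus/pci layout (not the helper functions in nvmf/common.sh):

    # Sketch: list E810 ports and their net devices the way the helper does, via sysfs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")   # e.g. 0x8086
        device=$(cat "$pci/device")   # e.g. 0x159b
        if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
            for dev in "$pci/net/"*; do
                # Net device bound to this PCI function, e.g. cvl_0_0
                [[ -e $dev ]] && echo "E810 port ${pci##*/}: ${dev##*/}"
            done
        fi
    done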
00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:17.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:17.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:17.647 10:22:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:18.220 10:22:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:20.133 10:22:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:25.437 10:22:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:25.437 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:25.437 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:25.437 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:25.437 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:25.437 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:25.438 10:22:10 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:25.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:25.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:28:25.438 00:28:25.438 --- 10.0.0.2 ping statistics --- 00:28:25.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.438 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:25.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:25.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:28:25.438 00:28:25.438 --- 10.0.0.1 ping statistics --- 00:28:25.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:25.438 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:25.438 10:22:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2949727 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2949727 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 2949727 ']' 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:25.438 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:25.438 [2024-05-15 10:22:11.075439] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
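Note on nvmftestinit above: because this is a phy (back-to-back) setup, the two physical ports are split between a network namespace and the host. cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the initiator side), an iptables rule opens TCP/4420, and the two pings verify the link in both directions before nvmf_tgt is started inside the namespace with --wait-for-rpc so socket options can be set before framework init. A condensed recreation of that split, using the device names and addresses from the trace (not a substitute for nvmf/common.sh):

    # Condensed recreation of the namespace split performed by nvmf_tcp_init.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator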
00:28:25.438 [2024-05-15 10:22:11.075509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.438 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.438 [2024-05-15 10:22:11.146939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:25.438 [2024-05-15 10:22:11.187576] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.438 [2024-05-15 10:22:11.187621] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.438 [2024-05-15 10:22:11.187629] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:25.438 [2024-05-15 10:22:11.187637] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:25.438 [2024-05-15 10:22:11.187643] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.438 [2024-05-15 10:22:11.187803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.438 [2024-05-15 10:22:11.187939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:25.438 [2024-05-15 10:22:11.188097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.438 [2024-05-15 10:22:11.188099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 [2024-05-15 10:22:12.031200] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 Malloc1 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:26.384 [2024-05-15 10:22:12.090357] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:26.384 [2024-05-15 10:22:12.090617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2949987 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:26.384 10:22:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:26.384 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
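Note on adq_configure_nvmf_target 0 above: the paused target is driven over JSON-RPC to set placement-id 0 and zero-copy send on the posix sock implementation, start the framework, create a TCP transport with --sock-priority 0, and expose a 64 MB malloc bdev (512-byte blocks) as namespace 1 of nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420; spdk_nvme_perf is then launched from the root namespace (-q 64 -o 4096 -w randread -t 10 -c 0xF0, i.e. four initiator cores). rpc_cmd in the harness is a thin wrapper around scripts/rpc.py, so the same configuration could roughly be issued directly as follows (a sketch; single-shot rpc.py calls instead of the harness's persistent RPC session, default /var/tmp/spdk.sock assumed):

    # Baseline (non-pinned) target configuration, expressed as direct rpc.py calls.
    RPC=./scripts/rpc.py
    $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $RPC bdev_malloc_create 64 512 -b Malloc1           # 64 MB RAM-backed bdev
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420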
00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:28.938 "tick_rate": 2400000000, 00:28:28.938 "poll_groups": [ 00:28:28.938 { 00:28:28.938 "name": "nvmf_tgt_poll_group_000", 00:28:28.938 "admin_qpairs": 1, 00:28:28.938 "io_qpairs": 1, 00:28:28.938 "current_admin_qpairs": 1, 00:28:28.938 "current_io_qpairs": 1, 00:28:28.938 "pending_bdev_io": 0, 00:28:28.938 "completed_nvme_io": 19457, 00:28:28.938 "transports": [ 00:28:28.938 { 00:28:28.938 "trtype": "TCP" 00:28:28.938 } 00:28:28.938 ] 00:28:28.938 }, 00:28:28.938 { 00:28:28.938 "name": "nvmf_tgt_poll_group_001", 00:28:28.938 "admin_qpairs": 0, 00:28:28.938 "io_qpairs": 1, 00:28:28.938 "current_admin_qpairs": 0, 00:28:28.938 "current_io_qpairs": 1, 00:28:28.938 "pending_bdev_io": 0, 00:28:28.938 "completed_nvme_io": 28372, 00:28:28.938 "transports": [ 00:28:28.938 { 00:28:28.938 "trtype": "TCP" 00:28:28.938 } 00:28:28.938 ] 00:28:28.938 }, 00:28:28.938 { 00:28:28.938 "name": "nvmf_tgt_poll_group_002", 00:28:28.938 "admin_qpairs": 0, 00:28:28.938 "io_qpairs": 1, 00:28:28.938 "current_admin_qpairs": 0, 00:28:28.938 "current_io_qpairs": 1, 00:28:28.938 "pending_bdev_io": 0, 00:28:28.938 "completed_nvme_io": 19835, 00:28:28.938 "transports": [ 00:28:28.938 { 00:28:28.938 "trtype": "TCP" 00:28:28.938 } 00:28:28.938 ] 00:28:28.938 }, 00:28:28.938 { 00:28:28.938 "name": "nvmf_tgt_poll_group_003", 00:28:28.938 "admin_qpairs": 0, 00:28:28.938 "io_qpairs": 1, 00:28:28.938 "current_admin_qpairs": 0, 00:28:28.938 "current_io_qpairs": 1, 00:28:28.938 "pending_bdev_io": 0, 00:28:28.938 "completed_nvme_io": 19726, 00:28:28.938 "transports": [ 00:28:28.938 { 00:28:28.938 "trtype": "TCP" 00:28:28.938 } 00:28:28.938 ] 00:28:28.938 } 00:28:28.938 ] 00:28:28.938 }' 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:28.938 10:22:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2949987 00:28:37.090 Initializing NVMe Controllers 00:28:37.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:37.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:37.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:37.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:37.090 Initialization complete. Launching workers. 
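Note on the nvmf_get_stats output above: this is the pass/fail signal for the baseline run. With sock priority 0 and placement-id 0, the four IO qpairs are distributed round-robin, so each of the four poll groups must report current_io_qpairs == 1; the jq filter selects those groups and wc -l must count four. A standalone version of that check (jq expression taken from perf_adq.sh; the stats.json file name is assumed, the script keeps the JSON in a shell variable instead):

    # Assumes the nvmf_get_stats JSON shown above was saved to stats.json.
    count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' stats.json | wc -l)
    if [[ $count -ne 4 ]]; then
        echo "baseline check failed: expected 4 busy poll groups, got $count" >&2
        exit 1
    fi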
00:28:37.090 ======================================================== 00:28:37.090 Latency(us) 00:28:37.090 Device Information : IOPS MiB/s Average min max 00:28:37.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10642.90 41.57 6014.65 1721.70 9923.11 00:28:37.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14343.40 56.03 4462.29 1429.05 47980.04 00:28:37.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13536.30 52.88 4743.27 1427.82 47097.08 00:28:37.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13059.70 51.01 4913.66 1318.57 45904.10 00:28:37.090 ======================================================== 00:28:37.090 Total : 51582.29 201.49 4970.60 1318.57 47980.04 00:28:37.090 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:37.090 rmmod nvme_tcp 00:28:37.090 rmmod nvme_fabrics 00:28:37.090 rmmod nvme_keyring 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2949727 ']' 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2949727 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 2949727 ']' 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2949727 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2949727 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2949727' 00:28:37.090 killing process with pid 2949727 00:28:37.090 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2949727 00:28:37.090 [2024-05-15 10:22:22.387414] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2949727 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:37.091 10:22:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.091 10:22:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.014 10:22:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:39.014 10:22:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:39.014 10:22:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:40.403 10:22:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:42.375 10:22:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:47.675 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:47.676 
10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:47.676 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:47.676 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:47.676 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:47.676 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:47.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:28:47.676 00:28:47.676 --- 10.0.0.2 ping statistics --- 00:28:47.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.676 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:47.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.437 ms 00:28:47.676 00:28:47.676 --- 10.0.0.1 ping statistics --- 00:28:47.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.676 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:47.676 net.core.busy_poll = 1 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:47.676 net.core.busy_read = 1 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:47.676 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2954448 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2954448 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # '[' -z 2954448 ']' 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:47.937 10:22:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.199 [2024-05-15 10:22:33.736210] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:28:48.199 [2024-05-15 10:22:33.736280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.199 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.199 [2024-05-15 10:22:33.808058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.199 [2024-05-15 10:22:33.847953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.199 [2024-05-15 10:22:33.848001] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.199 [2024-05-15 10:22:33.848009] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.199 [2024-05-15 10:22:33.848016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.199 [2024-05-15 10:22:33.848021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
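Note on adq_configure_driver above: after the ice driver reload, ADQ is switched on from the host side. hw-tc-offload is enabled on the target port, the channel-pkt-inspect-optimize private flag is turned off, busy polling is enabled globally (net.core.busy_poll / busy_read = 1), and a two-class mqprio qdisc plus a flower filter steer NVMe/TCP traffic (TCP toward 10.0.0.2:4420) into hardware traffic class 1; the set_xps_rxqs helper then aligns XPS transmit queues with the corresponding receive queues. Condensed from the trace above (all device-level commands run inside the target namespace):

    # ADQ host-side setup, condensed from the trace above.
    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ set).
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP (dst 10.0.0.2, TCP port 4420) into hardware TC 1.
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1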
00:28:48.199 [2024-05-15 10:22:33.848169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.199 [2024-05-15 10:22:33.848322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.199 [2024-05-15 10:22:33.848421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.199 [2024-05-15 10:22:33.848421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@861 -- # return 0 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:48.773 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.035 [2024-05-15 10:22:34.681523] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.035 Malloc1 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:49.035 [2024-05-15 10:22:34.740717] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:49.035 [2024-05-15 10:22:34.740962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2954797 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:49.035 10:22:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:49.035 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:51.587 "tick_rate": 2400000000, 00:28:51.587 "poll_groups": [ 00:28:51.587 { 00:28:51.587 "name": "nvmf_tgt_poll_group_000", 00:28:51.587 "admin_qpairs": 1, 00:28:51.587 "io_qpairs": 1, 00:28:51.587 "current_admin_qpairs": 1, 00:28:51.587 "current_io_qpairs": 1, 00:28:51.587 "pending_bdev_io": 0, 00:28:51.587 "completed_nvme_io": 25127, 00:28:51.587 "transports": [ 00:28:51.587 { 00:28:51.587 "trtype": "TCP" 00:28:51.587 } 00:28:51.587 ] 00:28:51.587 }, 00:28:51.587 { 00:28:51.587 "name": "nvmf_tgt_poll_group_001", 00:28:51.587 "admin_qpairs": 0, 00:28:51.587 "io_qpairs": 3, 00:28:51.587 "current_admin_qpairs": 0, 00:28:51.587 "current_io_qpairs": 3, 00:28:51.587 "pending_bdev_io": 0, 00:28:51.587 "completed_nvme_io": 42340, 00:28:51.587 "transports": [ 00:28:51.587 { 00:28:51.587 "trtype": "TCP" 00:28:51.587 } 00:28:51.587 ] 00:28:51.587 }, 00:28:51.587 { 00:28:51.587 "name": 
"nvmf_tgt_poll_group_002", 00:28:51.587 "admin_qpairs": 0, 00:28:51.587 "io_qpairs": 0, 00:28:51.587 "current_admin_qpairs": 0, 00:28:51.587 "current_io_qpairs": 0, 00:28:51.587 "pending_bdev_io": 0, 00:28:51.587 "completed_nvme_io": 0, 00:28:51.587 "transports": [ 00:28:51.587 { 00:28:51.587 "trtype": "TCP" 00:28:51.587 } 00:28:51.587 ] 00:28:51.587 }, 00:28:51.587 { 00:28:51.587 "name": "nvmf_tgt_poll_group_003", 00:28:51.587 "admin_qpairs": 0, 00:28:51.587 "io_qpairs": 0, 00:28:51.587 "current_admin_qpairs": 0, 00:28:51.587 "current_io_qpairs": 0, 00:28:51.587 "pending_bdev_io": 0, 00:28:51.587 "completed_nvme_io": 0, 00:28:51.587 "transports": [ 00:28:51.587 { 00:28:51.587 "trtype": "TCP" 00:28:51.587 } 00:28:51.587 ] 00:28:51.587 } 00:28:51.587 ] 00:28:51.587 }' 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:28:51.587 10:22:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2954797 00:28:59.737 Initializing NVMe Controllers 00:28:59.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:59.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:59.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:59.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:59.737 Initialization complete. Launching workers. 
00:28:59.737 ======================================================== 00:28:59.737 Latency(us) 00:28:59.737 Device Information : IOPS MiB/s Average min max 00:28:59.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6500.10 25.39 9848.16 1665.52 55488.07 00:28:59.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9135.00 35.68 7005.05 1252.72 53201.82 00:28:59.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6484.50 25.33 9872.01 1726.15 55238.44 00:28:59.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 16186.10 63.23 3965.67 1348.31 47017.21 00:28:59.737 ======================================================== 00:28:59.737 Total : 38305.69 149.63 6688.53 1252.72 55488.07 00:28:59.737 00:28:59.737 10:22:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:28:59.737 10:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:59.737 10:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:59.737 10:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:59.737 10:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:59.738 10:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:59.738 10:22:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:59.738 rmmod nvme_tcp 00:28:59.738 rmmod nvme_fabrics 00:28:59.738 rmmod nvme_keyring 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2954448 ']' 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2954448 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' -z 2954448 ']' 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # kill -0 2954448 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # uname 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2954448 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2954448' 00:28:59.738 killing process with pid 2954448 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # kill 2954448 00:28:59.738 [2024-05-15 10:22:45.070587] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@971 -- # wait 2954448 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:59.738 10:22:45 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.738 10:22:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.048 10:22:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:03.048 10:22:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:03.048 00:29:03.048 real 0m52.891s 00:29:03.048 user 2m49.665s 00:29:03.048 sys 0m10.763s 00:29:03.048 10:22:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:03.048 10:22:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:03.048 ************************************ 00:29:03.048 END TEST nvmf_perf_adq 00:29:03.048 ************************************ 00:29:03.048 10:22:48 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:03.048 10:22:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:03.048 10:22:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:03.048 10:22:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:03.048 ************************************ 00:29:03.048 START TEST nvmf_shutdown 00:29:03.048 ************************************ 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:03.048 * Looking for test storage... 
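The nvmftestfini sequence just above unwinds the ADQ run before the shutdown suite takes over, releasing the kernel modules, the target process, and the namespace topology in reverse order of creation. A condensed sketch of that teardown, assuming the process and interface names used throughout this run:

  # Unload the kernel initiator stack (the harness runs this under set +e,
  # since some modules may already be absent).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop the target, then drop the namespace holding cvl_0_0 and flush the
  # initiator-side address so the next test starts from a clean topology.
  kill "$nvmfpid"
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1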
00:29:03.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:03.048 ************************************ 00:29:03.048 START TEST nvmf_shutdown_tc1 00:29:03.048 ************************************ 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc1 00:29:03.048 10:22:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:03.048 10:22:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:11.207 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:11.208 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:11.208 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:11.208 10:22:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:11.208 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:11.208 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:11.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:29:11.208 00:29:11.208 --- 10.0.0.2 ping statistics --- 00:29:11.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.208 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.626 ms 00:29:11.208 00:29:11.208 --- 10.0.0.1 ping statistics --- 00:29:11.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.208 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:11.208 10:22:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2961116 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2961116 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2961116 ']' 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.208 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.209 [2024-05-15 10:22:56.067259] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:29:11.209 [2024-05-15 10:22:56.067332] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.209 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.209 [2024-05-15 10:22:56.155825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.209 [2024-05-15 10:22:56.204681] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.209 [2024-05-15 10:22:56.204737] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.209 [2024-05-15 10:22:56.204745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.209 [2024-05-15 10:22:56.204753] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.209 [2024-05-15 10:22:56.204759] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.209 [2024-05-15 10:22:56.204887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.209 [2024-05-15 10:22:56.205054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.209 [2024-05-15 10:22:56.205215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.209 [2024-05-15 10:22:56.205217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.209 [2024-05-15 10:22:56.902927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.209 10:22:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.209 Malloc1 00:29:11.471 [2024-05-15 10:22:57.006163] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:11.471 [2024-05-15 10:22:57.006388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.471 Malloc2 00:29:11.471 Malloc3 00:29:11.471 Malloc4 00:29:11.471 Malloc5 00:29:11.471 Malloc6 00:29:11.471 Malloc7 00:29:11.735 Malloc8 00:29:11.735 Malloc9 00:29:11.735 Malloc10 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.735 10:22:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2961339 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2961339 /var/tmp/bdevperf.sock 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # '[' -z 2961339 ']' 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:11.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:11.735 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 [2024-05-15 10:22:57.460676] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 
initialization... 00:29:11.736 [2024-05-15 10:22:57.460733] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.736 { 00:29:11.736 "params": { 00:29:11.736 "name": "Nvme$subsystem", 00:29:11.736 "trtype": "$TEST_TRANSPORT", 00:29:11.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.736 "adrfam": "ipv4", 00:29:11.736 "trsvcid": "$NVMF_PORT", 00:29:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:11.736 "hdgst": ${hdgst:-false}, 00:29:11.736 "ddgst": ${ddgst:-false} 00:29:11.736 }, 00:29:11.736 "method": "bdev_nvme_attach_controller" 00:29:11.736 } 00:29:11.736 EOF 00:29:11.736 )") 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.736 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.736 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:11.737 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:11.737 { 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme$subsystem", 00:29:11.737 "trtype": "$TEST_TRANSPORT", 00:29:11.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "$NVMF_PORT", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:11.737 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:29:11.737 "hdgst": ${hdgst:-false}, 00:29:11.737 "ddgst": ${ddgst:-false} 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 } 00:29:11.737 EOF 00:29:11.737 )") 00:29:11.737 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:11.737 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:11.737 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:11.737 10:22:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme1", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme2", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme3", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme4", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme5", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme6", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme7", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:11.737 "hdgst": false, 00:29:11.737 
"ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme8", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme9", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 },{ 00:29:11.737 "params": { 00:29:11.737 "name": "Nvme10", 00:29:11.737 "trtype": "tcp", 00:29:11.737 "traddr": "10.0.0.2", 00:29:11.737 "adrfam": "ipv4", 00:29:11.737 "trsvcid": "4420", 00:29:11.737 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:11.737 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:11.737 "hdgst": false, 00:29:11.737 "ddgst": false 00:29:11.737 }, 00:29:11.737 "method": "bdev_nvme_attach_controller" 00:29:11.737 }' 00:29:11.737 [2024-05-15 10:22:57.521034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.999 [2024-05-15 10:22:57.552190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # return 0 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2961339 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:13.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2961339 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:13.389 10:22:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:14.339 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2961116 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local 
subsystem config 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 
10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 [2024-05-15 10:22:59.960008] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
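The xtrace above shows nvmf/common.sh's gen_nvmf_target_json emitting one bdev_nvme_attach_controller fragment per subsystem and then joining them with IFS=, before jq validates the result. A minimal standalone sketch of that pattern (assuming only bash and jq; the wrapper object and the fallback defaults here are illustrative, not the exact helper):

    #!/usr/bin/env bash
    # Sketch of the per-subsystem config generation seen in the trace above.
    # TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT fall back to the values
    # used by this test run; the surrounding JSON wrapper is an assumption.
    gen_target_json_sketch() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"%s","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":%s,"ddgst":%s},"method":"bdev_nvme_attach_controller"}' \
                "$subsystem" "${TEST_TRANSPORT:-tcp}" "${NVMF_FIRST_TARGET_IP:-10.0.0.2}" "${NVMF_PORT:-4420}" \
                "$subsystem" "$subsystem" "${hdgst:-false}" "${ddgst:-false}")")
        done
        local IFS=,
        # Join the fragments into one config array and let jq pretty-print/validate it.
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
    }
    # e.g. gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10 > /tmp/bdevperf.json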
00:29:14.340 [2024-05-15 10:22:59.960060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962003 ] 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:14.340 { 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme$subsystem", 00:29:14.340 "trtype": "$TEST_TRANSPORT", 00:29:14.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "$NVMF_PORT", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.340 "hdgst": ${hdgst:-false}, 00:29:14.340 "ddgst": ${ddgst:-false} 00:29:14.340 }, 00:29:14.340 "method": "bdev_nvme_attach_controller" 00:29:14.340 } 
00:29:14.340 EOF 00:29:14.340 )") 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:14.340 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:14.340 10:22:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:14.340 "params": { 00:29:14.340 "name": "Nvme1", 00:29:14.340 "trtype": "tcp", 00:29:14.340 "traddr": "10.0.0.2", 00:29:14.340 "adrfam": "ipv4", 00:29:14.340 "trsvcid": "4420", 00:29:14.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme2", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme3", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme4", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme5", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme6", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme7", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 
"name": "Nvme8", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme9", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 },{ 00:29:14.341 "params": { 00:29:14.341 "name": "Nvme10", 00:29:14.341 "trtype": "tcp", 00:29:14.341 "traddr": "10.0.0.2", 00:29:14.341 "adrfam": "ipv4", 00:29:14.341 "trsvcid": "4420", 00:29:14.341 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:14.341 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:14.341 "hdgst": false, 00:29:14.341 "ddgst": false 00:29:14.341 }, 00:29:14.341 "method": "bdev_nvme_attach_controller" 00:29:14.341 }' 00:29:14.341 [2024-05-15 10:23:00.023934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.341 [2024-05-15 10:23:00.056831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.780 Running I/O for 1 seconds... 00:29:17.169 00:29:17.169 Latency(us) 00:29:17.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.169 Verification LBA range: start 0x0 length 0x400 00:29:17.169 Nvme1n1 : 1.17 218.64 13.66 0.00 0.00 289704.53 24248.32 258648.75 00:29:17.169 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.169 Verification LBA range: start 0x0 length 0x400 00:29:17.169 Nvme2n1 : 1.10 233.38 14.59 0.00 0.00 266455.89 25777.49 230686.72 00:29:17.169 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.169 Verification LBA range: start 0x0 length 0x400 00:29:17.169 Nvme3n1 : 1.06 240.85 15.05 0.00 0.00 252887.68 23483.73 237677.23 00:29:17.169 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.169 Verification LBA range: start 0x0 length 0x400 00:29:17.169 Nvme4n1 : 1.06 240.64 15.04 0.00 0.00 248639.79 22937.60 242920.11 00:29:17.169 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.169 Verification LBA range: start 0x0 length 0x400 00:29:17.169 Nvme5n1 : 1.11 230.34 14.40 0.00 0.00 255769.39 23483.73 260396.37 00:29:17.169 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.170 Verification LBA range: start 0x0 length 0x400 00:29:17.170 Nvme6n1 : 1.16 221.06 13.82 0.00 0.00 262518.19 23702.19 260396.37 00:29:17.170 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.170 Verification LBA range: start 0x0 length 0x400 00:29:17.170 Nvme7n1 : 1.23 258.89 16.18 0.00 0.00 221802.38 17476.27 281367.89 00:29:17.170 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.170 Verification LBA range: start 0x0 length 0x400 00:29:17.170 Nvme8n1 : 1.23 207.61 12.98 0.00 0.00 271764.69 21736.11 323310.93 00:29:17.170 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.170 
Verification LBA range: start 0x0 length 0x400 00:29:17.170 Nvme9n1 : 1.19 215.17 13.45 0.00 0.00 256613.76 25012.91 265639.25 00:29:17.170 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:17.170 Verification LBA range: start 0x0 length 0x400 00:29:17.170 Nvme10n1 : 1.21 211.21 13.20 0.00 0.00 257508.05 23920.64 284863.15 00:29:17.170 =================================================================================================================== 00:29:17.170 Total : 2277.79 142.36 0.00 0.00 257488.23 17476.27 323310.93 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.170 rmmod nvme_tcp 00:29:17.170 rmmod nvme_fabrics 00:29:17.170 rmmod nvme_keyring 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2961116 ']' 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2961116 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' -z 2961116 ']' 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # kill -0 2961116 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # uname 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2961116 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2961116' 00:29:17.170 killing process with pid 2961116 00:29:17.170 
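The teardown above follows the usual autotest pattern: verify the nvmf target pid with kill -0, confirm via ps that it is not a sudo process, print the "killing process with pid ..." marker, and only then send the signal. A reduced sketch of that liveness-check-then-kill flow (names are illustrative, not the killprocess() helper itself):

    # Illustrative stop helper modelled on the killprocess trace above.
    stop_pid_sketch() {
        local pid=$1 sig=${2:-TERM}
        # kill -0 delivers no signal; it only tests that the pid exists and can be signalled.
        if kill -0 "$pid" 2>/dev/null; then
            echo "killing process with pid $pid"
            kill "-$sig" "$pid"
            # Reap it if it is our child so the exit status is collected before cleanup continues.
            wait "$pid" 2>/dev/null || true
        fi
    }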
10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # kill 2961116 00:29:17.170 [2024-05-15 10:23:02.946479] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:17.170 10:23:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # wait 2961116 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.432 10:23:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:19.984 00:29:19.984 real 0m16.681s 00:29:19.984 user 0m33.774s 00:29:19.984 sys 0m6.758s 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:19.984 ************************************ 00:29:19.984 END TEST nvmf_shutdown_tc1 00:29:19.984 ************************************ 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:19.984 ************************************ 00:29:19.984 START TEST nvmf_shutdown_tc2 00:29:19.984 ************************************ 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc2 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.984 10:23:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:19.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:19.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:19.984 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.985 10:23:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:19.985 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:19.985 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:19.985 
10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:29:19.985 00:29:19.985 --- 10.0.0.2 ping statistics --- 00:29:19.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.985 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:29:19.985 00:29:19.985 --- 10.0.0.1 ping statistics --- 00:29:19.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.985 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2963122 
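nvmf_tcp_init above builds the two-endpoint topology this test relies on: the target NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator NIC (cvl_0_1) keeps 10.0.0.1 in the default namespace, and both directions are ping-verified before the target starts. The same commands, reduced to their essentials (interface names and addresses taken from the log; run as root, error handling omitted):

    # Condensed from the nvmf_tcp_init trace above.
    ip netns add cvl_0_0_ns_spdk                     # namespace that will hold the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target -> initiator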
00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2963122 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2963122 ']' 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:19.985 10:23:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.247 [2024-05-15 10:23:05.791008] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:20.247 [2024-05-15 10:23:05.791073] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.247 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.247 [2024-05-15 10:23:05.876443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:20.247 [2024-05-15 10:23:05.910260] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.247 [2024-05-15 10:23:05.910299] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.247 [2024-05-15 10:23:05.910305] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.247 [2024-05-15 10:23:05.910310] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.247 [2024-05-15 10:23:05.910314] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
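waitforlisten above blocks until pid 2963122 is up and its RPC socket answers; only then does the test begin issuing rpc_cmd calls. A minimal sketch of such a wait loop (retry count and interval are illustrative; $rootdir is the SPDK checkout as elsewhere in this log):

    # Illustrative wait-for-RPC loop; the real helper is waitforlisten() in autotest_common.sh.
    wait_for_rpc_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
        while ((retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
            if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                              # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }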
00:29:20.247 [2024-05-15 10:23:05.910451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.247 [2024-05-15 10:23:05.910689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.247 [2024-05-15 10:23:05.910850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.247 [2024-05-15 10:23:05.910850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:20.820 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:20.820 [2024-05-15 10:23:06.611710] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:21.082 10:23:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.082 Malloc1 00:29:21.082 [2024-05-15 10:23:06.710321] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:21.082 [2024-05-15 10:23:06.710526] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.082 Malloc2 00:29:21.082 Malloc3 00:29:21.082 Malloc4 00:29:21.082 Malloc5 00:29:21.344 Malloc6 00:29:21.344 Malloc7 00:29:21.344 Malloc8 00:29:21.344 Malloc9 00:29:21.344 Malloc10 00:29:21.344 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:21.344 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:21.344 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:21.344 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.344 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2963503 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2963503 /var/tmp/bdevperf.sock 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # '[' -z 2963503 ']' 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:21.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
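Each "# cat" in the create_subsystems loop above appends one subsystem's worth of RPCs to rpcs.txt, and the single rpc_cmd call replays the whole batch, which is why Malloc1 through Malloc10 appear back to back. Issued one at a time through rpc.py, the setup for a single subsystem would look roughly like this (bdev size and serial number are illustrative; the NQNs, address and port mirror the log):

    # Per-subsystem setup, unbatched for clarity; the test pipes the same RPCs through rpcs.txt.
    rpc="$rootdir/scripts/rpc.py"
    i=1
    $rpc bdev_malloc_create -b "Malloc$i" 128 512                            # 128 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"  # -a: allow any host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420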
00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.345 { 00:29:21.345 "params": { 00:29:21.345 "name": "Nvme$subsystem", 00:29:21.345 "trtype": "$TEST_TRANSPORT", 00:29:21.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.345 "adrfam": "ipv4", 00:29:21.345 "trsvcid": "$NVMF_PORT", 00:29:21.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.345 "hdgst": ${hdgst:-false}, 00:29:21.345 "ddgst": ${ddgst:-false} 00:29:21.345 }, 00:29:21.345 "method": "bdev_nvme_attach_controller" 00:29:21.345 } 00:29:21.345 EOF 00:29:21.345 )") 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.345 { 00:29:21.345 "params": { 00:29:21.345 "name": "Nvme$subsystem", 00:29:21.345 "trtype": "$TEST_TRANSPORT", 00:29:21.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.345 "adrfam": "ipv4", 00:29:21.345 "trsvcid": "$NVMF_PORT", 00:29:21.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.345 "hdgst": ${hdgst:-false}, 00:29:21.345 "ddgst": ${ddgst:-false} 00:29:21.345 }, 00:29:21.345 "method": "bdev_nvme_attach_controller" 00:29:21.345 } 00:29:21.345 EOF 00:29:21.345 )") 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.345 { 00:29:21.345 "params": { 00:29:21.345 "name": "Nvme$subsystem", 00:29:21.345 "trtype": "$TEST_TRANSPORT", 00:29:21.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.345 "adrfam": "ipv4", 00:29:21.345 "trsvcid": "$NVMF_PORT", 00:29:21.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.345 "hdgst": ${hdgst:-false}, 00:29:21.345 "ddgst": ${ddgst:-false} 00:29:21.345 }, 00:29:21.345 "method": "bdev_nvme_attach_controller" 00:29:21.345 } 00:29:21.345 EOF 00:29:21.345 )") 00:29:21.345 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.607 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:29:21.607 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.608 { 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme$subsystem", 00:29:21.608 "trtype": "$TEST_TRANSPORT", 00:29:21.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "$NVMF_PORT", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.608 "hdgst": ${hdgst:-false}, 00:29:21.608 "ddgst": ${ddgst:-false} 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 } 00:29:21.608 EOF 00:29:21.608 )") 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.608 { 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme$subsystem", 00:29:21.608 "trtype": "$TEST_TRANSPORT", 00:29:21.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "$NVMF_PORT", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.608 "hdgst": ${hdgst:-false}, 00:29:21.608 "ddgst": ${ddgst:-false} 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 } 00:29:21.608 EOF 00:29:21.608 )") 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.608 { 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme$subsystem", 00:29:21.608 "trtype": "$TEST_TRANSPORT", 00:29:21.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "$NVMF_PORT", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.608 "hdgst": ${hdgst:-false}, 00:29:21.608 "ddgst": ${ddgst:-false} 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 } 00:29:21.608 EOF 00:29:21.608 )") 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.608 [2024-05-15 10:23:07.159802] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
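For reference, the bdevperf flags traced above map directly onto the workload this test case drives; run standalone against a generated config, the invocation would look roughly as follows (the JSON path is illustrative, the flags are the ones in the log):

    # -r   RPC socket so the test can drive bdevperf while it runs
    # -q   queue depth per bdev (64)
    # -o   I/O size in bytes (65536 = 64 KiB)
    # -w   workload: verify (write, then read back and compare)
    # -t   run time in seconds (10 for tc2, 1 for tc1)
    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10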
00:29:21.608 [2024-05-15 10:23:07.159854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2963503 ] 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.608 { 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme$subsystem", 00:29:21.608 "trtype": "$TEST_TRANSPORT", 00:29:21.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "$NVMF_PORT", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.608 "hdgst": ${hdgst:-false}, 00:29:21.608 "ddgst": ${ddgst:-false} 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 } 00:29:21.608 EOF 00:29:21.608 )") 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.608 { 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme$subsystem", 00:29:21.608 "trtype": "$TEST_TRANSPORT", 00:29:21.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "$NVMF_PORT", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.608 "hdgst": ${hdgst:-false}, 00:29:21.608 "ddgst": ${ddgst:-false} 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 } 00:29:21.608 EOF 00:29:21.608 )") 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.608 { 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme$subsystem", 00:29:21.608 "trtype": "$TEST_TRANSPORT", 00:29:21.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "$NVMF_PORT", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.608 "hdgst": ${hdgst:-false}, 00:29:21.608 "ddgst": ${ddgst:-false} 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 } 00:29:21.608 EOF 00:29:21.608 )") 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.608 { 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme$subsystem", 00:29:21.608 "trtype": "$TEST_TRANSPORT", 00:29:21.608 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "$NVMF_PORT", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.608 "hdgst": ${hdgst:-false}, 
00:29:21.608 "ddgst": ${ddgst:-false} 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 } 00:29:21.608 EOF 00:29:21.608 )") 00:29:21.608 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:21.608 10:23:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme1", 00:29:21.608 "trtype": "tcp", 00:29:21.608 "traddr": "10.0.0.2", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "4420", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:21.608 "hdgst": false, 00:29:21.608 "ddgst": false 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 },{ 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme2", 00:29:21.608 "trtype": "tcp", 00:29:21.608 "traddr": "10.0.0.2", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "4420", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:21.608 "hdgst": false, 00:29:21.608 "ddgst": false 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 },{ 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme3", 00:29:21.608 "trtype": "tcp", 00:29:21.608 "traddr": "10.0.0.2", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "4420", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:21.608 "hdgst": false, 00:29:21.608 "ddgst": false 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 },{ 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme4", 00:29:21.608 "trtype": "tcp", 00:29:21.608 "traddr": "10.0.0.2", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "4420", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:21.608 "hdgst": false, 00:29:21.608 "ddgst": false 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 },{ 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme5", 00:29:21.608 "trtype": "tcp", 00:29:21.608 "traddr": "10.0.0.2", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "4420", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:21.608 "hdgst": false, 00:29:21.608 "ddgst": false 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 },{ 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme6", 00:29:21.608 "trtype": "tcp", 00:29:21.608 "traddr": "10.0.0.2", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "4420", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:21.608 "hdgst": false, 00:29:21.608 "ddgst": false 00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 },{ 00:29:21.608 "params": { 00:29:21.608 "name": "Nvme7", 00:29:21.608 "trtype": "tcp", 00:29:21.608 "traddr": "10.0.0.2", 00:29:21.608 "adrfam": "ipv4", 00:29:21.608 "trsvcid": "4420", 00:29:21.608 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:21.608 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:21.608 "hdgst": false, 00:29:21.608 "ddgst": false 
00:29:21.608 }, 00:29:21.608 "method": "bdev_nvme_attach_controller" 00:29:21.608 },{ 00:29:21.608 "params": { 00:29:21.609 "name": "Nvme8", 00:29:21.609 "trtype": "tcp", 00:29:21.609 "traddr": "10.0.0.2", 00:29:21.609 "adrfam": "ipv4", 00:29:21.609 "trsvcid": "4420", 00:29:21.609 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:21.609 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:21.609 "hdgst": false, 00:29:21.609 "ddgst": false 00:29:21.609 }, 00:29:21.609 "method": "bdev_nvme_attach_controller" 00:29:21.609 },{ 00:29:21.609 "params": { 00:29:21.609 "name": "Nvme9", 00:29:21.609 "trtype": "tcp", 00:29:21.609 "traddr": "10.0.0.2", 00:29:21.609 "adrfam": "ipv4", 00:29:21.609 "trsvcid": "4420", 00:29:21.609 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:21.609 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:21.609 "hdgst": false, 00:29:21.609 "ddgst": false 00:29:21.609 }, 00:29:21.609 "method": "bdev_nvme_attach_controller" 00:29:21.609 },{ 00:29:21.609 "params": { 00:29:21.609 "name": "Nvme10", 00:29:21.609 "trtype": "tcp", 00:29:21.609 "traddr": "10.0.0.2", 00:29:21.609 "adrfam": "ipv4", 00:29:21.609 "trsvcid": "4420", 00:29:21.609 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:21.609 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:21.609 "hdgst": false, 00:29:21.609 "ddgst": false 00:29:21.609 }, 00:29:21.609 "method": "bdev_nvme_attach_controller" 00:29:21.609 }' 00:29:21.609 [2024-05-15 10:23:07.219564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.609 [2024-05-15 10:23:07.250705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.532 Running I/O for 10 seconds... 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # return 0 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.107 10:23:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=81 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 81 -ge 100 ']' 00:29:24.107 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:24.370 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:24.370 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:24.370 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:24.370 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:24.370 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:24.370 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:24.370 10:23:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2963503 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2963503 ']' 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2963503 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2963503 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2963503' 00:29:24.370 killing process with pid 2963503 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2963503 00:29:24.370 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2963503 00:29:24.633 Received shutdown signal, test time was about 1.149404 seconds 00:29:24.633 00:29:24.633 Latency(us) 00:29:24.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.633 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 
Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme1n1 : 1.15 222.90 13.93 0.00 0.00 274538.67 45001.39 270882.13 00:29:24.633 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme2n1 : 1.09 176.95 11.06 0.00 0.00 350687.00 24466.77 304087.04 00:29:24.633 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme3n1 : 1.09 175.48 10.97 0.00 0.00 348094.01 45219.84 337291.95 00:29:24.633 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme4n1 : 1.10 174.17 10.89 0.00 0.00 343653.83 32112.64 346030.08 00:29:24.633 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme5n1 : 1.07 178.96 11.19 0.00 0.00 326252.94 24794.45 304087.04 00:29:24.633 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme6n1 : 1.10 174.02 10.88 0.00 0.00 331043.84 29928.11 356515.84 00:29:24.633 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme7n1 : 1.09 454.43 28.40 0.00 0.00 122941.45 18022.40 169519.79 00:29:24.633 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme8n1 : 1.12 228.92 14.31 0.00 0.00 242565.55 24576.00 298844.16 00:29:24.633 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme9n1 : 1.10 232.45 14.53 0.00 0.00 233064.53 24029.87 293601.28 00:29:24.633 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:24.633 Verification LBA range: start 0x0 length 0x400 00:29:24.633 Nvme10n1 : 1.08 177.11 11.07 0.00 0.00 299297.00 37137.07 290106.03 00:29:24.633 =================================================================================================================== 00:29:24.633 Total : 2195.38 137.21 0.00 0.00 263649.16 18022.40 356515.84 00:29:24.633 10:23:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:25.581 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2963122 00:29:25.581 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:25.582 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:25.582 rmmod nvme_tcp 00:29:25.582 rmmod nvme_fabrics 00:29:25.843 rmmod nvme_keyring 00:29:25.843 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2963122 ']' 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2963122 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' -z 2963122 ']' 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # kill -0 2963122 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # uname 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2963122 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2963122' 00:29:25.844 killing process with pid 2963122 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # kill 2963122 00:29:25.844 [2024-05-15 10:23:11.472983] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:25.844 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # wait 2963122 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:26.106 10:23:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.022 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
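
The tc2 teardown above stops bdevperf and then the nvmf target through the harness's killprocess helper: check that the pid is still alive with kill -0, log the owning command name, send the signal, and wait so the exit status is reaped before the next test case starts. A minimal standalone sketch of that pattern, with illustrative naming rather than the exact helper from common/autotest_common.sh:

killprocess_sketch() {
    local pid=$1
    # Nothing to do if the process has already exited.
    kill -0 "$pid" 2> /dev/null || return 0
    # Show which process is being stopped, as the log above does.
    echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
    kill "$pid"
    # Reap the child so its sockets and hugepage files are released
    # before the next test case brings up its own target.
    wait "$pid" 2> /dev/null || true
}

The wait is what makes the teardown synchronous; kill alone returns before the reactor has actually shut down and released its resources.
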
00:29:28.022 00:29:28.022 real 0m8.442s 00:29:28.022 user 0m26.202s 00:29:28.022 sys 0m1.418s 00:29:28.022 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:28.022 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.022 ************************************ 00:29:28.022 END TEST nvmf_shutdown_tc2 00:29:28.022 ************************************ 00:29:28.022 10:23:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:28.022 10:23:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:28.023 10:23:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:28.023 10:23:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:28.285 ************************************ 00:29:28.285 START TEST nvmf_shutdown_tc3 00:29:28.285 ************************************ 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # nvmf_shutdown_tc3 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:28.285 10:23:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:28.285 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:28.286 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:28.286 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:28.286 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:29:28.286 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.286 10:23:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.286 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.286 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.286 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:28.286 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:28.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:28.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:29:28.548 00:29:28.548 --- 10.0.0.2 ping statistics --- 00:29:28.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.548 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.500 ms 00:29:28.548 00:29:28.548 --- 10.0.0.1 ping statistics --- 00:29:28.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.548 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2964973 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2964973 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2964973 ']' 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
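
Before the tc3 target comes up, the harness has wired one physical port into a private network namespace and proved two-way reachability with single-packet pings. Condensed from the commands visible above (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this rig uses):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
ping -c 1 10.0.0.2                                   # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

With both pings answering, nvmf_tgt is then launched inside the namespace so its 10.0.0.2:4420 listener is reachable from the initiator-side interface.
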
00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:28.548 10:23:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:28.548 [2024-05-15 10:23:14.323822] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:28.548 [2024-05-15 10:23:14.323871] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.810 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.810 [2024-05-15 10:23:14.407151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.810 [2024-05-15 10:23:14.437458] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.810 [2024-05-15 10:23:14.437490] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.810 [2024-05-15 10:23:14.437496] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.810 [2024-05-15 10:23:14.437500] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.810 [2024-05-15 10:23:14.437504] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.810 [2024-05-15 10:23:14.437621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.810 [2024-05-15 10:23:14.437778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.810 [2024-05-15 10:23:14.437931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.810 [2024-05-15 10:23:14.437934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:29.383 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:29.383 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:29:29.383 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:29.383 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.384 [2024-05-15 10:23:15.126496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:29.384 
10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.384 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.646 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.646 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.646 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:29.646 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:29.646 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:29.646 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:29.646 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.646 Malloc1 00:29:29.646 [2024-05-15 10:23:15.225201] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:29.646 [2024-05-15 10:23:15.225398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.646 Malloc2 00:29:29.646 Malloc3 00:29:29.646 Malloc4 00:29:29.646 Malloc5 00:29:29.646 Malloc6 00:29:29.646 Malloc7 00:29:29.908 Malloc8 00:29:29.908 Malloc9 00:29:29.908 Malloc10 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:29.908 10:23:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2965202 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2965202 /var/tmp/bdevperf.sock 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # '[' -z 2965202 ']' 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:29.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.908 { 00:29:29.908 "params": { 00:29:29.908 "name": "Nvme$subsystem", 00:29:29.908 "trtype": "$TEST_TRANSPORT", 00:29:29.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.908 "adrfam": "ipv4", 00:29:29.908 "trsvcid": "$NVMF_PORT", 00:29:29.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.908 "hdgst": ${hdgst:-false}, 00:29:29.908 "ddgst": ${ddgst:-false} 00:29:29.908 }, 00:29:29.908 "method": "bdev_nvme_attach_controller" 00:29:29.908 } 00:29:29.908 EOF 00:29:29.908 )") 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.908 { 00:29:29.908 "params": { 00:29:29.908 "name": "Nvme$subsystem", 00:29:29.908 "trtype": "$TEST_TRANSPORT", 00:29:29.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.908 "adrfam": "ipv4", 00:29:29.908 "trsvcid": "$NVMF_PORT", 00:29:29.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:29:29.908 "hdgst": ${hdgst:-false}, 00:29:29.908 "ddgst": ${ddgst:-false} 00:29:29.908 }, 00:29:29.908 "method": "bdev_nvme_attach_controller" 00:29:29.908 } 00:29:29.908 EOF 00:29:29.908 )") 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.908 { 00:29:29.908 "params": { 00:29:29.908 "name": "Nvme$subsystem", 00:29:29.908 "trtype": "$TEST_TRANSPORT", 00:29:29.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.908 "adrfam": "ipv4", 00:29:29.908 "trsvcid": "$NVMF_PORT", 00:29:29.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.908 "hdgst": ${hdgst:-false}, 00:29:29.908 "ddgst": ${ddgst:-false} 00:29:29.908 }, 00:29:29.908 "method": "bdev_nvme_attach_controller" 00:29:29.908 } 00:29:29.908 EOF 00:29:29.908 )") 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.908 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.908 { 00:29:29.908 "params": { 00:29:29.908 "name": "Nvme$subsystem", 00:29:29.908 "trtype": "$TEST_TRANSPORT", 00:29:29.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.908 "adrfam": "ipv4", 00:29:29.908 "trsvcid": "$NVMF_PORT", 00:29:29.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.908 "hdgst": ${hdgst:-false}, 00:29:29.908 "ddgst": ${ddgst:-false} 00:29:29.908 }, 00:29:29.908 "method": "bdev_nvme_attach_controller" 00:29:29.908 } 00:29:29.908 EOF 00:29:29.908 )") 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.909 { 00:29:29.909 "params": { 00:29:29.909 "name": "Nvme$subsystem", 00:29:29.909 "trtype": "$TEST_TRANSPORT", 00:29:29.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.909 "adrfam": "ipv4", 00:29:29.909 "trsvcid": "$NVMF_PORT", 00:29:29.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.909 "hdgst": ${hdgst:-false}, 00:29:29.909 "ddgst": ${ddgst:-false} 00:29:29.909 }, 00:29:29.909 "method": "bdev_nvme_attach_controller" 00:29:29.909 } 00:29:29.909 EOF 00:29:29.909 )") 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.909 { 00:29:29.909 "params": { 00:29:29.909 "name": "Nvme$subsystem", 00:29:29.909 "trtype": "$TEST_TRANSPORT", 00:29:29.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.909 "adrfam": "ipv4", 00:29:29.909 "trsvcid": "$NVMF_PORT", 00:29:29.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.909 "hdgst": 
${hdgst:-false}, 00:29:29.909 "ddgst": ${ddgst:-false} 00:29:29.909 }, 00:29:29.909 "method": "bdev_nvme_attach_controller" 00:29:29.909 } 00:29:29.909 EOF 00:29:29.909 )") 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.909 { 00:29:29.909 "params": { 00:29:29.909 "name": "Nvme$subsystem", 00:29:29.909 "trtype": "$TEST_TRANSPORT", 00:29:29.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.909 "adrfam": "ipv4", 00:29:29.909 "trsvcid": "$NVMF_PORT", 00:29:29.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.909 "hdgst": ${hdgst:-false}, 00:29:29.909 "ddgst": ${ddgst:-false} 00:29:29.909 }, 00:29:29.909 "method": "bdev_nvme_attach_controller" 00:29:29.909 } 00:29:29.909 EOF 00:29:29.909 )") 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.909 [2024-05-15 10:23:15.678691] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:29.909 [2024-05-15 10:23:15.678742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2965202 ] 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.909 { 00:29:29.909 "params": { 00:29:29.909 "name": "Nvme$subsystem", 00:29:29.909 "trtype": "$TEST_TRANSPORT", 00:29:29.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.909 "adrfam": "ipv4", 00:29:29.909 "trsvcid": "$NVMF_PORT", 00:29:29.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.909 "hdgst": ${hdgst:-false}, 00:29:29.909 "ddgst": ${ddgst:-false} 00:29:29.909 }, 00:29:29.909 "method": "bdev_nvme_attach_controller" 00:29:29.909 } 00:29:29.909 EOF 00:29:29.909 )") 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.909 { 00:29:29.909 "params": { 00:29:29.909 "name": "Nvme$subsystem", 00:29:29.909 "trtype": "$TEST_TRANSPORT", 00:29:29.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.909 "adrfam": "ipv4", 00:29:29.909 "trsvcid": "$NVMF_PORT", 00:29:29.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.909 "hdgst": ${hdgst:-false}, 00:29:29.909 "ddgst": ${ddgst:-false} 00:29:29.909 }, 00:29:29.909 "method": "bdev_nvme_attach_controller" 00:29:29.909 } 00:29:29.909 EOF 00:29:29.909 )") 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:29.909 { 
00:29:29.909 "params": { 00:29:29.909 "name": "Nvme$subsystem", 00:29:29.909 "trtype": "$TEST_TRANSPORT", 00:29:29.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:29.909 "adrfam": "ipv4", 00:29:29.909 "trsvcid": "$NVMF_PORT", 00:29:29.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:29.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:29.909 "hdgst": ${hdgst:-false}, 00:29:29.909 "ddgst": ${ddgst:-false} 00:29:29.909 }, 00:29:29.909 "method": "bdev_nvme_attach_controller" 00:29:29.909 } 00:29:29.909 EOF 00:29:29.909 )") 00:29:29.909 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:30.171 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:29:30.171 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.171 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:30.171 10:23:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme1", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme2", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme3", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme4", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme5", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme6", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": 
"bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme7", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme8", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme9", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 },{ 00:29:30.171 "params": { 00:29:30.171 "name": "Nvme10", 00:29:30.171 "trtype": "tcp", 00:29:30.171 "traddr": "10.0.0.2", 00:29:30.171 "adrfam": "ipv4", 00:29:30.171 "trsvcid": "4420", 00:29:30.171 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:30.171 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:30.171 "hdgst": false, 00:29:30.171 "ddgst": false 00:29:30.171 }, 00:29:30.171 "method": "bdev_nvme_attach_controller" 00:29:30.171 }' 00:29:30.171 [2024-05-15 10:23:15.738670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.171 [2024-05-15 10:23:15.769864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.090 Running I/O for 10 seconds... 
00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # return 0 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:32.090 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:32.091 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:32.368 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:32.368 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:32.368 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:32.368 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:32.368 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:32.368 10:23:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=125 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 125 -ge 100 ']' 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2964973 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' -z 2964973 ']' 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # kill -0 2964973 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # uname 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2964973 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2964973' 00:29:32.368 killing process with pid 2964973 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # kill 2964973 00:29:32.368 [2024-05-15 10:23:18.074777] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:32.368 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # wait 2964973 00:29:32.368 [2024-05-15 10:23:18.075559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075588] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 10:23:18.075621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set 00:29:32.368 [2024-05-15 
10:23:18.075626] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6a7d0 is same with the state(5) to be set [... the same tcp.c:1598:nvmf_tcp_qpair_set_recv_state error repeats continuously for tqpair=0xd6a7d0, 0xd68290, 0xd68730, 0xd68bd0, 0xd69070 and 0xd69510 between 10:23:18.075630 and 10:23:18.079463; the duplicate log lines are omitted ...] 00:29:32.371 [2024-05-15 10:23:18.079468]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd69510 is same with the state(5) to be set 00:29:32.371 [2024-05-15 10:23:18.079472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd69510 is same with the state(5) to be set 00:29:32.371 [2024-05-15 10:23:18.079477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd69510 is same with the state(5) to be set 00:29:32.371 [2024-05-15 10:23:18.079481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd69510 is same with the state(5) to be set 00:29:32.371 [2024-05-15 10:23:18.079486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd69510 is same with the state(5) to be set 00:29:32.371 [2024-05-15 10:23:18.079491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd69510 is same with the state(5) to be set 00:29:32.371 [2024-05-15 10:23:18.082462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.371 [2024-05-15 10:23:18.082906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.371 [2024-05-15 10:23:18.082914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.082923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.082930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.082940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.082946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.082956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.082963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.082972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.082979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.082988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.082995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:32.372 [2024-05-15 10:23:18.083315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.372 [2024-05-15 10:23:18.083449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.372 [2024-05-15 10:23:18.083458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 10:23:18.083466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 
10:23:18.083482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 10:23:18.083498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 10:23:18.083515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 10:23:18.083531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 10:23:18.083547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 10:23:18.083563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.373 [2024-05-15 10:23:18.083579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.083609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:32.373 [2024-05-15 10:23:18.083653] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f61a80 was disconnected and freed. reset controller. 
00:29:32.373 [2024-05-15 10:23:18.084815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfe820 is same with the state(5) to be set 00:29:32.373 [2024-05-15 10:23:18.084924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.084979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.084987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede7d0 is same with the state(5) to be set 00:29:32.373 [2024-05-15 10:23:18.085013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6870 is same with the state(5) to be set 00:29:32.373 [2024-05-15 10:23:18.085099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee2c90 is same with the state(5) to be set 00:29:32.373 [2024-05-15 10:23:18.085187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:32.373 [2024-05-15 10:23:18.085227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede9e0 is same with the state(5) to be set 00:29:32.373 [2024-05-15 10:23:18.085274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fa610 is same with the state(5) to be set 00:29:32.373 [2024-05-15 10:23:18.085364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085422] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6410 is same with the state(5) to be set 00:29:32.373 [2024-05-15 10:23:18.085454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.373 [2024-05-15 10:23:18.085462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.373 [2024-05-15 10:23:18.085470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e29cf0 is same with the state(5) to be set 00:29:32.374 [2024-05-15 10:23:18.085538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc6b0 is same with the state(5) to be set 00:29:32.374 [2024-05-15 10:23:18.085621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 
[2024-05-15 10:23:18.085628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:32.374 [2024-05-15 10:23:18.085677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.085684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e219b0 is same with the state(5) to be set 00:29:32.374 [2024-05-15 10:23:18.086105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.374 [2024-05-15 10:23:18.086609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.374 [2024-05-15 10:23:18.086617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.086626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.096713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.096787] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f608a0 was disconnected and freed. reset controller. 
00:29:32.375 [2024-05-15 10:23:18.098247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.098267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.098283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.098296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.098307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.098314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.098324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.098332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.098342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.375 [2024-05-15 10:23:18.098349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.375 [2024-05-15 10:23:18.098359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 
[2024-05-15 10:23:18.098446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 
10:23:18.098623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098793] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.098988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.098998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.376 [2024-05-15 10:23:18.099005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.376 [2024-05-15 10:23:18.099014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099428] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1df8020 was disconnected and freed. reset controller. 00:29:32.377 [2024-05-15 10:23:18.099505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.377 [2024-05-15 10:23:18.099764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.377 [2024-05-15 10:23:18.099771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.099988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.099996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100115] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.378 [2024-05-15 10:23:18.100470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.378 [2024-05-15 10:23:18.100477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100643] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1df9520 was disconnected and freed. reset controller. 
00:29:32.379 [2024-05-15 10:23:18.100717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 
[2024-05-15 10:23:18.100892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.100985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.100994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.101002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.101012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.101021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.101031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.101039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.101049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 
10:23:18.106086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 
10:23:18.106269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.379 [2024-05-15 10:23:18.106320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.379 [2024-05-15 10:23:18.106329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106448] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.106862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.106932] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dfa880 was disconnected and freed. reset controller. 00:29:32.380 [2024-05-15 10:23:18.107041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:32.380 [2024-05-15 10:23:18.107067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e219b0 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfe820 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede7d0 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6870 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107152] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee2c90 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede9e0 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fa610 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6410 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29cf0 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfc6b0 (9): Bad file descriptor 00:29:32.380 [2024-05-15 10:23:18.107321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.107331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.107343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.107352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.107362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.107370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.107380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.107387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.380 [2024-05-15 10:23:18.107397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.380 [2024-05-15 10:23:18.107405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.381 [2024-05-15 10:23:18.107987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.381 [2024-05-15 10:23:18.107996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
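[editor's note] The long runs of paired `nvme_io_qpair_print_command` / `spdk_nvme_print_completion` entries above are the in-flight READ and WRITE commands on queue pair 1 being completed with status `(00/08)` as the queue pair is torn down during the controller reset. In NVMe terms, status code type 0x0 is the Generic Command Status group, and status code 0x08 within it is "Command Aborted due to SQ Deletion", which matches the text SPDK prints. Below is a minimal, standalone C sketch (an illustration only, not SPDK's own printer) of decoding the `(SCT/SC)` pair shown in these entries:

```c
#include <stdio.h>

/* Decode the generic (SCT 0x0) status codes relevant to this trace. */
static const char *generic_sc_str(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "SUCCESSFUL COMPLETION";
    case 0x07: return "COMMAND ABORT REQUESTED";
    case 0x08: return "COMMAND ABORTED DUE TO SQ DELETION";
    default:   return "OTHER GENERIC COMMAND STATUS";
    }
}

int main(void)
{
    unsigned int sct = 0x0, sc = 0x08;   /* the "(00/08)" pair from the log */

    if (sct == 0x0)
        printf("(%02x/%02x) -> %s\n", sct, sc, generic_sc_str(sc));
    else
        printf("(%02x/%02x) -> non-generic status code type\n", sct, sc);
    return 0;
}
```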
00:29:32.382 [2024-05-15 10:23:18.108030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 
10:23:18.108207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108384] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.382 [2024-05-15 10:23:18.108425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.382 [2024-05-15 10:23:18.108478] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1eac750 was disconnected and freed. reset controller. 00:29:32.382 [2024-05-15 10:23:18.115016] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.382 [2024-05-15 10:23:18.115130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:32.382 [2024-05-15 10:23:18.115764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.116499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.116538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e219b0 with addr=10.0.0.2, port=4420 00:29:32.382 [2024-05-15 10:23:18.116552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e219b0 is same with the state(5) to be set 00:29:32.382 [2024-05-15 10:23:18.116948] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.382 [2024-05-15 10:23:18.117774] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.382 [2024-05-15 10:23:18.117794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:32.382 [2024-05-15 10:23:18.117806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:32.382 [2024-05-15 10:23:18.117815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:32.382 [2024-05-15 10:23:18.117825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:32.382 [2024-05-15 10:23:18.118545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.119134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.119148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e29cf0 with addr=10.0.0.2, port=4420 00:29:32.382 [2024-05-15 10:23:18.119158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e29cf0 is same with the state(5) to be set 00:29:32.382 [2024-05-15 10:23:18.119175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e219b0 (9): Bad file descriptor 00:29:32.382 
[2024-05-15 10:23:18.119547] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:32.382 [2024-05-15 10:23:18.120184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.120629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.120668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fa610 with addr=10.0.0.2, port=4420 00:29:32.382 [2024-05-15 10:23:18.120679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fa610 is same with the state(5) to be set 00:29:32.382 [2024-05-15 10:23:18.121302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.121998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.122036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ede9e0 with addr=10.0.0.2, port=4420 00:29:32.382 [2024-05-15 10:23:18.122048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede9e0 is same with the state(5) to be set 00:29:32.382 [2024-05-15 10:23:18.122701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.123506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.123544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ede7d0 with addr=10.0.0.2, port=4420 00:29:32.382 [2024-05-15 10:23:18.123556] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ede7d0 is same with the state(5) to be set 00:29:32.382 [2024-05-15 10:23:18.124168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.124696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.382 [2024-05-15 10:23:18.124734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfe820 with addr=10.0.0.2, port=4420 00:29:32.382 [2024-05-15 10:23:18.124746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfe820 is same with the state(5) to be set 00:29:32.382 [2024-05-15 10:23:18.124763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29cf0 (9): Bad file descriptor 00:29:32.382 [2024-05-15 10:23:18.124775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:32.382 [2024-05-15 10:23:18.124782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:32.382 [2024-05-15 10:23:18.124791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
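[editor's note] Besides the aborted I/O, this stretch shows why the reset of `nqn.2016-06.io.spdk:cnode4` does not complete: each `nvme_tcp_qpair_connect_sock` attempt toward 10.0.0.2:4420 fails because `posix_sock_create`'s `connect()` returns errno 111, the already-broken qpairs report "(9): Bad file descriptor" when flushed, and the controller is finally marked failed ("controller reinitialization failed ... in failed state"). On Linux, errno 111 is ECONNREFUSED and errno 9 is EBADF, which is consistent with the "Connection refused"/"Bad file descriptor" behavior seen here; it suggests no listener was accepting connections on that address and port at that instant. A small sketch, assuming a Linux host, mapping the two raw error numbers from this trace to their symbolic names:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The two raw error numbers appearing in this part of the trace:
     * connect() failing with 111, and the flush failing on fd error 9. */
    int errs[] = { 111, 9 };

    for (size_t i = 0; i < sizeof(errs) / sizeof(errs[0]); i++) {
        const char *name = errs[i] == ECONNREFUSED ? "ECONNREFUSED" :
                           errs[i] == EBADF        ? "EBADF" : "other";
        printf("errno %d -> %s (%s)\n", errs[i], name, strerror(errs[i]));
    }
    return 0;
}
```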
00:29:32.382 [2024-05-15 10:23:18.124868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.124881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.124899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.124907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.124917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.124924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.124934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.124941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.124951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.124964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.124975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.124982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.124992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.124999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125058] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.383 [2024-05-15 10:23:18.125582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.383 [2024-05-15 10:23:18.125591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.125973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.125981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eadae0 is same with the state(5) to be set 00:29:32.384 [2024-05-15 10:23:18.127283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127403] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.384 [2024-05-15 10:23:18.127605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.384 [2024-05-15 10:23:18.127613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:32.385 [2024-05-15 10:23:18.127924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.127983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.127992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 
[2024-05-15 10:23:18.128095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.385 [2024-05-15 10:23:18.128196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.385 [2024-05-15 10:23:18.128204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 
10:23:18.128264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.128394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.128402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62f10 is same with the state(5) to be set 00:29:32.386 [2024-05-15 10:23:18.129673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.129988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.129995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.386 [2024-05-15 10:23:18.130195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.386 [2024-05-15 10:23:18.130203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:32.387 [2024-05-15 10:23:18.130419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 
[2024-05-15 10:23:18.130591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 
10:23:18.130761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.130785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.130795] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df6b20 is same with the state(5) to be set 00:29:32.387 [2024-05-15 10:23:18.132347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.132365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.132378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.132385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.132395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.132403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.132412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.132420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.132430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.132438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.132447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.387 [2024-05-15 10:23:18.132454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.387 [2024-05-15 10:23:18.132464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132488] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132666] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.132981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.132989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.388 [2024-05-15 10:23:18.133158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.388 [2024-05-15 10:23:18.133168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:32.389 [2024-05-15 10:23:18.133359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.389 [2024-05-15 10:23:18.133455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:32.389 [2024-05-15 10:23:18.133463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea50a0 is same with the state(5) to be set 00:29:32.389 [2024-05-15 10:23:18.135208] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
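The block of *NOTICE* lines above is SPDK dumping every READ that was still outstanding on I/O queue pair 1 when the controller reset tore the queue down: nvme_io_qpair_print_command prints the submitted command (sqid, cid, nsid, starting LBA, length in blocks, SGL type), and spdk_nvme_print_completion prints the matching completion, here always the generic status "ABORTED - SQ DELETION (00/08)", i.e. status code type 0x0 with status code 0x08 (command aborted because its submission queue was deleted), with the phase (p), more (m) and do-not-retry (dnr) bits all 0. A minimal sketch, not SPDK's own code and with illustrative names, of how that (sct/sc) pair maps to the string seen in the log:

    /* Illustrative only: decode the "(00/08)" status pair printed above.
     * Per the NVMe base spec, status code type 0x0 is the generic set and
     * status code 0x08 is "Command Aborted due to SQ Deletion". */
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_SCT_GENERIC            0x0
    #define NVME_SC_ABORTED_SQ_DELETION 0x08

    static const char *nvme_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION) {
            return "ABORTED - SQ DELETION";
        }
        return "UNKNOWN";
    }

    int main(void)
    {
        uint8_t sct = 0x0, sc = 0x08;   /* the "(00/08)" pair from the log */
        printf("%s (%02x/%02x)\n", nvme_status_str(sct, sc), sct, sc);
        return 0;
    }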
00:29:32.389 [2024-05-15 10:23:18.135227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:29:32.389 [2024-05-15 10:23:18.135239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:32.389 [2024-05-15 10:23:18.135249] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:32.652 task offset: 18688 on job bdev=Nvme4n1 fails
00:29:32.652
00:29:32.652                                   Latency(us)
00:29:32.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.652 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme1n1 ended in about 0.73 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme1n1 : 0.73 174.83 10.93 87.41 0.00 240616.39 27634.35 246415.36
00:29:32.652 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme2n1 ended in about 0.74 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme2n1 : 0.74 85.94 5.37 85.94 0.00 357800.53 25012.91 276125.01
00:29:32.652 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme3n1 ended in about 0.73 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme3n1 : 0.73 173.26 10.83 88.00 0.00 228687.01 23483.73 258648.75
00:29:32.652 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme4n1 ended in about 0.72 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme4n1 : 0.72 178.88 11.18 89.44 0.00 216061.16 14636.37 253405.87
00:29:32.652 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme5n1 ended in about 0.75 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme5n1 : 0.75 85.67 5.35 85.67 0.00 330568.53 44782.93 279620.27
00:29:32.652 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme6n1 ended in about 0.75 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme6n1 : 0.75 85.39 5.34 85.39 0.00 322257.07 29709.65 274377.39
00:29:32.652 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme7n1 ended in about 0.73 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme7n1 : 0.73 175.69 10.98 87.85 0.00 201397.48 24139.09 246415.36
00:29:32.652 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme8n1 ended in about 0.73 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme8n1 : 0.73 175.41 10.96 87.70 0.00 195456.85 25995.95 202724.69
00:29:32.652 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme9n1 ended in about 0.73 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme9n1 : 0.73 175.13 10.95 87.57 0.00 189635.41 24903.68 228939.09
00:29:32.652 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:32.652 Job: Nvme10n1 ended in about 0.75 seconds with error
00:29:32.652 Verification LBA range: start 0x0 length 0x400
00:29:32.652 Nvme10n1 : 0.75 85.09 5.32 85.09 0.00 285921.28 24248.32 293601.28
00:29:32.652 ===================================================================================================================
00:29:32.652 Total : 1395.29 87.21 870.07 0.00 246508.26 14636.37 293601.28
00:29:32.652 [2024-05-15 10:23:18.164106] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:32.652 [2024-05-15 10:23:18.164177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fa610 (9): Bad file descriptor
00:29:32.652 [2024-05-15 10:23:18.164192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede9e0 (9): Bad file descriptor
00:29:32.652 [2024-05-15 10:23:18.164203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ede7d0 (9): Bad file descriptor
00:29:32.652 [2024-05-15 10:23:18.164213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfe820 (9): Bad file descriptor
00:29:32.652 [2024-05-15 10:23:18.164222] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:29:32.652 [2024-05-15 10:23:18.164229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:29:32.652 [2024-05-15 10:23:18.164237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:29:32.653 [2024-05-15 10:23:18.164278] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.653 [2024-05-15 10:23:18.164298] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.653 [2024-05-15 10:23:18.164314] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.653 [2024-05-15 10:23:18.164324] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.653 [2024-05-15 10:23:18.164335] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.653 [2024-05-15 10:23:18.164345] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:32.653 [2024-05-15 10:23:18.164428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:29:32.653 [2024-05-15 10:23:18.164451] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:32.653 [2024-05-15 10:23:18.165137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.165573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.165586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dfc6b0 with addr=10.0.0.2, port=4420 00:29:32.653 [2024-05-15 10:23:18.165596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dfc6b0 is same with the state(5) to be set 00:29:32.653 [2024-05-15 10:23:18.165886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.166490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.166501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc6410 with addr=10.0.0.2, port=4420 00:29:32.653 [2024-05-15 10:23:18.166508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc6410 is same with the state(5) to be set 00:29:32.653 [2024-05-15 10:23:18.166758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.167355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.167366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee2c90 with addr=10.0.0.2, port=4420 00:29:32.653 [2024-05-15 10:23:18.167377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee2c90 is same with the state(5) to be set 00:29:32.653 [2024-05-15 10:23:18.167385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.167392] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.167399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:32.653 [2024-05-15 10:23:18.167413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.167419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.167426] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:32.653 [2024-05-15 10:23:18.167436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.167443] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.167460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:32.653 [2024-05-15 10:23:18.167472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.167479] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.167485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:32.653 [2024-05-15 10:23:18.167512] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.653 [2024-05-15 10:23:18.167532] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.653 [2024-05-15 10:23:18.167544] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.653 [2024-05-15 10:23:18.167555] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.653 [2024-05-15 10:23:18.167567] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:32.653 [2024-05-15 10:23:18.168607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:32.653 [2024-05-15 10:23:18.168634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.168642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.168648] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.168655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.169271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.169833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.169845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb6870 with addr=10.0.0.2, port=4420 00:29:32.653 [2024-05-15 10:23:18.169852] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb6870 is same with the state(5) to be set 00:29:32.653 [2024-05-15 10:23:18.169863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dfc6b0 (9): Bad file descriptor 00:29:32.653 [2024-05-15 10:23:18.169874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6410 (9): Bad file descriptor 00:29:32.653 [2024-05-15 10:23:18.169883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee2c90 (9): Bad file descriptor 00:29:32.653 [2024-05-15 10:23:18.169955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:32.653 [2024-05-15 10:23:18.170549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.171158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.171168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e219b0 with addr=10.0.0.2, port=4420 00:29:32.653 [2024-05-15 10:23:18.171176] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e219b0 is same with the state(5) to be set 00:29:32.653 [2024-05-15 10:23:18.171184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb6870 (9): Bad file descriptor 00:29:32.653 [2024-05-15 10:23:18.171193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.171199] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.171206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:32.653 [2024-05-15 10:23:18.171216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.171223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.171229] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:32.653 [2024-05-15 10:23:18.171239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.171246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.171252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:32.653 [2024-05-15 10:23:18.171311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.171320] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.171326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.171902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.172598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.653 [2024-05-15 10:23:18.172641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e29cf0 with addr=10.0.0.2, port=4420 00:29:32.653 [2024-05-15 10:23:18.172653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e29cf0 is same with the state(5) to be set 00:29:32.653 [2024-05-15 10:23:18.172669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e219b0 (9): Bad file descriptor 00:29:32.653 [2024-05-15 10:23:18.172679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.172686] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.172693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:32.653 [2024-05-15 10:23:18.172736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.172745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e29cf0 (9): Bad file descriptor 00:29:32.653 [2024-05-15 10:23:18.172753] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.172761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.172767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:29:32.653 [2024-05-15 10:23:18.172816] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 [2024-05-15 10:23:18.172825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:32.653 [2024-05-15 10:23:18.172832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:32.653 [2024-05-15 10:23:18.172839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:32.653 [2024-05-15 10:23:18.172867] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:32.653 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:32.653 10:23:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2965202 00:29:33.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2965202) - No such process 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:33.601 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:33.601 rmmod nvme_tcp 00:29:33.601 rmmod nvme_fabrics 00:29:33.601 rmmod nvme_keyring 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:33.897 10:23:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:35.816 10:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:35.816 00:29:35.816 real 0m7.629s 00:29:35.816 user 0m18.159s 00:29:35.816 sys 0m1.228s 00:29:35.816 10:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:35.816 10:23:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:35.816 ************************************ 00:29:35.816 END TEST nvmf_shutdown_tc3 00:29:35.816 ************************************ 00:29:35.816 10:23:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:35.816 00:29:35.816 real 0m33.160s 00:29:35.816 user 1m18.291s 00:29:35.816 sys 0m9.670s 00:29:35.816 10:23:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:35.816 10:23:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:35.816 ************************************ 00:29:35.816 END TEST nvmf_shutdown 00:29:35.816 ************************************ 00:29:35.816 10:23:21 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:29:35.816 10:23:21 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:35.816 10:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.079 10:23:21 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:29:36.079 10:23:21 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:36.079 10:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.079 10:23:21 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:29:36.079 10:23:21 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:36.079 10:23:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:36.079 10:23:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:36.079 10:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.079 ************************************ 00:29:36.079 START TEST nvmf_multicontroller 00:29:36.079 ************************************ 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:36.079 * Looking for test storage... 
00:29:36.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:36.079 10:23:21 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:36.079 10:23:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.237 10:23:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:44.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:44.237 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:44.237 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:44.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:44.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.238 10:23:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:44.238 10:23:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:44.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:44.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:29:44.238 00:29:44.238 --- 10.0.0.2 ping statistics --- 00:29:44.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.238 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:44.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.482 ms 00:29:44.238 00:29:44.238 --- 10.0.0.1 ping statistics --- 00:29:44.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.238 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2970079 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2970079 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2970079 ']' 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 [2024-05-15 10:23:29.186014] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:44.238 [2024-05-15 10:23:29.186082] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.238 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.238 [2024-05-15 10:23:29.273602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:44.238 [2024-05-15 10:23:29.321389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.238 [2024-05-15 10:23:29.321448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.238 [2024-05-15 10:23:29.321457] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.238 [2024-05-15 10:23:29.321465] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.238 [2024-05-15 10:23:29.321471] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.238 [2024-05-15 10:23:29.321595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.238 [2024-05-15 10:23:29.321761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.238 [2024-05-15 10:23:29.321761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:44.238 10:23:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 10:23:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.238 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:44.238 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.238 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.238 [2024-05-15 10:23:30.019535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.238 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.238 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:44.238 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.238 10:23:30 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.500 Malloc0 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.500 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 [2024-05-15 10:23:30.097122] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:44.501 [2024-05-15 10:23:30.097363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 [2024-05-15 10:23:30.109266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 Malloc1 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2970339 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2970339 /var/tmp/bdevperf.sock 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2970339 ']' 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:44.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:44.501 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.763 NVMe0n1 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:44.763 1 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.763 request: 00:29:44.763 { 00:29:44.763 "name": "NVMe0", 00:29:44.763 "trtype": "tcp", 00:29:44.763 "traddr": "10.0.0.2", 00:29:44.763 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:44.763 "hostaddr": "10.0.0.2", 00:29:44.763 "hostsvcid": "60000", 00:29:44.763 "adrfam": "ipv4", 00:29:44.763 "trsvcid": "4420", 00:29:44.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.763 "method": 
"bdev_nvme_attach_controller", 00:29:44.763 "req_id": 1 00:29:44.763 } 00:29:44.763 Got JSON-RPC error response 00:29:44.763 response: 00:29:44.763 { 00:29:44.763 "code": -114, 00:29:44.763 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:44.763 } 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:44.763 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:44.764 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:44.764 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:44.764 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:44.764 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.026 request: 00:29:45.026 { 00:29:45.026 "name": "NVMe0", 00:29:45.026 "trtype": "tcp", 00:29:45.026 "traddr": "10.0.0.2", 00:29:45.026 "hostaddr": "10.0.0.2", 00:29:45.026 "hostsvcid": "60000", 00:29:45.026 "adrfam": "ipv4", 00:29:45.026 "trsvcid": "4420", 00:29:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:45.026 "method": "bdev_nvme_attach_controller", 00:29:45.026 "req_id": 1 00:29:45.026 } 00:29:45.026 Got JSON-RPC error response 00:29:45.026 response: 00:29:45.026 { 00:29:45.026 "code": -114, 00:29:45.026 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:45.026 } 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.026 request: 00:29:45.026 { 00:29:45.026 "name": "NVMe0", 00:29:45.026 "trtype": "tcp", 00:29:45.026 "traddr": "10.0.0.2", 00:29:45.026 "hostaddr": "10.0.0.2", 00:29:45.026 "hostsvcid": "60000", 00:29:45.026 "adrfam": "ipv4", 00:29:45.026 "trsvcid": "4420", 00:29:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.026 "multipath": "disable", 00:29:45.026 "method": "bdev_nvme_attach_controller", 00:29:45.026 "req_id": 1 00:29:45.026 } 00:29:45.026 Got JSON-RPC error response 00:29:45.026 response: 00:29:45.026 { 00:29:45.026 "code": -114, 00:29:45.026 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:45.026 } 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@641 -- # type -t rpc_cmd 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.026 request: 00:29:45.026 { 00:29:45.026 "name": "NVMe0", 00:29:45.026 "trtype": "tcp", 00:29:45.026 "traddr": "10.0.0.2", 00:29:45.026 "hostaddr": "10.0.0.2", 00:29:45.026 "hostsvcid": "60000", 00:29:45.026 "adrfam": "ipv4", 00:29:45.026 "trsvcid": "4420", 00:29:45.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:45.026 "multipath": "failover", 00:29:45.026 "method": "bdev_nvme_attach_controller", 00:29:45.026 "req_id": 1 00:29:45.026 } 00:29:45.026 Got JSON-RPC error response 00:29:45.026 response: 00:29:45.026 { 00:29:45.026 "code": -114, 00:29:45.026 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:45.026 } 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.026 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.026 00:29:45.026 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.027 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:45.027 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.027 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:45.027 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.288 10:23:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.288 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:45.288 10:23:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:46.233 0 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2970339 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2970339 ']' 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2970339 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:46.233 10:23:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2970339 00:29:46.233 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:46.233 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:46.233 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2970339' 00:29:46.233 killing process with pid 2970339 00:29:46.233 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2970339 00:29:46.233 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2970339 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:46.496 10:23:32 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:29:46.496 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:46.496 [2024-05-15 10:23:30.223761] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:46.496 [2024-05-15 10:23:30.223816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970339 ] 00:29:46.496 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.496 [2024-05-15 10:23:30.282752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.496 [2024-05-15 10:23:30.313905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.496 [2024-05-15 10:23:30.805280] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 502c7a3a-cc11-4df4-8d70-0499aa7d153a already exists 00:29:46.496 [2024-05-15 10:23:30.805314] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:502c7a3a-cc11-4df4-8d70-0499aa7d153a alias for bdev NVMe1n1 00:29:46.496 [2024-05-15 10:23:30.805325] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:46.496 Running I/O for 1 seconds... 
00:29:46.496 00:29:46.496 Latency(us) 00:29:46.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.496 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:46.496 NVMe0n1 : 1.00 19887.28 77.68 0.00 0.00 6419.39 4778.67 27634.35 00:29:46.496 =================================================================================================================== 00:29:46.496 Total : 19887.28 77.68 0.00 0.00 6419.39 4778.67 27634.35 00:29:46.496 Received shutdown signal, test time was about 1.000000 seconds 00:29:46.496 00:29:46.496 Latency(us) 00:29:46.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.496 =================================================================================================================== 00:29:46.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.496 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:46.496 rmmod nvme_tcp 00:29:46.496 rmmod nvme_fabrics 00:29:46.496 rmmod nvme_keyring 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2970079 ']' 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2970079 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2970079 ']' 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2970079 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:46.496 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2970079 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2970079' 00:29:46.758 killing process with pid 2970079 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2970079 00:29:46.758 [2024-05-15 
10:23:32.301833] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2970079 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.758 10:23:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.311 10:23:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:49.311 00:29:49.311 real 0m12.849s 00:29:49.311 user 0m13.482s 00:29:49.311 sys 0m6.187s 00:29:49.311 10:23:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:49.311 10:23:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:49.311 ************************************ 00:29:49.311 END TEST nvmf_multicontroller 00:29:49.311 ************************************ 00:29:49.311 10:23:34 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:49.311 10:23:34 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:49.311 10:23:34 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:49.311 10:23:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:49.311 ************************************ 00:29:49.311 START TEST nvmf_aer 00:29:49.311 ************************************ 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:49.311 * Looking for test storage... 
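For reference, the multipath sequence that the nvmf_multicontroller run above exercises can be reproduced outside the test harness by issuing the same calls against the bdevperf RPC socket directly. This is a minimal sketch, assuming an SPDK source checkout (for scripts/rpc.py and the bdevperf helper) and a bdevperf instance already listening on /var/tmp/bdevperf.sock; the controller names, addresses, ports and NQNs are the ones that appear in the log, and rpc_cmd in the log is just the harness wrapper around the same RPC interface:

  # first path; repeating this with the same -b name and the same 10.0.0.2:4420 path fails with
  # code -114, as do the retries with -x disable and -x failover (see the error responses above)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # a second path on port 4421 under the same -b name is accepted in the run above
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # detach that path, attach a second controller, and count what is registered (the test expects 2)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe

  # drive I/O through the attached controllers, as the harness does before tearing down
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests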
00:29:49.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:49.311 10:23:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:55.914 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:29:55.914 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.914 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:55.915 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:55.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.915 
10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.915 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:56.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:29:56.178 00:29:56.178 --- 10.0.0.2 ping statistics --- 00:29:56.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.178 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:56.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:29:56.178 00:29:56.178 --- 10.0.0.1 ping statistics --- 00:29:56.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.178 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2974780 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2974780 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 2974780 ']' 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:56.178 10:23:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.440 [2024-05-15 10:23:42.003754] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:29:56.440 [2024-05-15 10:23:42.003806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.440 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.440 [2024-05-15 10:23:42.070629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.440 [2024-05-15 10:23:42.102763] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.440 [2024-05-15 10:23:42.102802] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:56.440 [2024-05-15 10:23:42.102809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.440 [2024-05-15 10:23:42.102816] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.440 [2024-05-15 10:23:42.102822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.440 [2024-05-15 10:23:42.102957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.440 [2024-05-15 10:23:42.103088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.440 [2024-05-15 10:23:42.103245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.440 [2024-05-15 10:23:42.103246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.440 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.440 [2024-05-15 10:23:42.232106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.701 Malloc0 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.701 [2024-05-15 10:23:42.291225] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:56.701 [2024-05-15 10:23:42.291471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.701 [ 00:29:56.701 { 00:29:56.701 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:56.701 "subtype": "Discovery", 00:29:56.701 "listen_addresses": [], 00:29:56.701 "allow_any_host": true, 00:29:56.701 "hosts": [] 00:29:56.701 }, 00:29:56.701 { 00:29:56.701 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.701 "subtype": "NVMe", 00:29:56.701 "listen_addresses": [ 00:29:56.701 { 00:29:56.701 "trtype": "TCP", 00:29:56.701 "adrfam": "IPv4", 00:29:56.701 "traddr": "10.0.0.2", 00:29:56.701 "trsvcid": "4420" 00:29:56.701 } 00:29:56.701 ], 00:29:56.701 "allow_any_host": true, 00:29:56.701 "hosts": [], 00:29:56.701 "serial_number": "SPDK00000000000001", 00:29:56.701 "model_number": "SPDK bdev Controller", 00:29:56.701 "max_namespaces": 2, 00:29:56.701 "min_cntlid": 1, 00:29:56.701 "max_cntlid": 65519, 00:29:56.701 "namespaces": [ 00:29:56.701 { 00:29:56.701 "nsid": 1, 00:29:56.701 "bdev_name": "Malloc0", 00:29:56.701 "name": "Malloc0", 00:29:56.701 "nguid": "75FEE4506FE44EBEB8E8FF87DB6033FE", 00:29:56.701 "uuid": "75fee450-6fe4-4ebe-b8e8-ff87db6033fe" 00:29:56.701 } 00:29:56.701 ] 00:29:56.701 } 00:29:56.701 ] 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2974902 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:29:56.701 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:29:56.701 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.963 Malloc1 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.963 [ 00:29:56.963 { 00:29:56.963 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:56.963 "subtype": "Discovery", 00:29:56.963 "listen_addresses": [], 00:29:56.963 "allow_any_host": true, 00:29:56.963 "hosts": [] 00:29:56.963 }, 00:29:56.963 { 00:29:56.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.963 "subtype": "NVMe", 00:29:56.963 "listen_addresses": [ 00:29:56.963 { 00:29:56.963 "trtype": "TCP", 00:29:56.963 "adrfam": "IPv4", 00:29:56.963 "traddr": "10.0.0.2", 00:29:56.963 "trsvcid": "4420" 00:29:56.963 } 00:29:56.963 ], 00:29:56.963 "allow_any_host": true, 00:29:56.963 "hosts": [], 00:29:56.963 "serial_number": "SPDK00000000000001", 00:29:56.963 "model_number": "SPDK bdev Controller", 00:29:56.963 "max_namespaces": 2, 00:29:56.963 "min_cntlid": 1, 00:29:56.963 "max_cntlid": 65519, 00:29:56.963 "namespaces": [ 00:29:56.963 { 00:29:56.963 "nsid": 1, 00:29:56.963 "bdev_name": "Malloc0", 00:29:56.963 "name": "Malloc0", 00:29:56.963 "nguid": "75FEE4506FE44EBEB8E8FF87DB6033FE", 00:29:56.963 "uuid": "75fee450-6fe4-4ebe-b8e8-ff87db6033fe" 00:29:56.963 Asynchronous Event Request test 00:29:56.963 Attaching to 10.0.0.2 00:29:56.963 Attached to 10.0.0.2 00:29:56.963 Registering asynchronous event callbacks... 00:29:56.963 Starting namespace attribute notice tests for all controllers... 00:29:56.963 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:56.963 aer_cb - Changed Namespace 00:29:56.963 Cleaning up... 
00:29:56.963 }, 00:29:56.963 { 00:29:56.963 "nsid": 2, 00:29:56.963 "bdev_name": "Malloc1", 00:29:56.963 "name": "Malloc1", 00:29:56.963 "nguid": "EF31023C28AF4DF3AAF3CBFCB2BB69D5", 00:29:56.963 "uuid": "ef31023c-28af-4df3-aaf3-cbfcb2bb69d5" 00:29:56.963 } 00:29:56.963 ] 00:29:56.963 } 00:29:56.963 ] 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2974902 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.963 rmmod nvme_tcp 00:29:56.963 rmmod nvme_fabrics 00:29:56.963 rmmod nvme_keyring 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2974780 ']' 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2974780 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 2974780 ']' 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 2974780 00:29:56.963 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:29:56.964 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:56.964 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2974780 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
2974780' 00:29:57.226 killing process with pid 2974780 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 2974780 00:29:57.226 [2024-05-15 10:23:42.767584] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 2974780 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.226 10:23:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.777 10:23:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:59.777 00:29:59.777 real 0m10.354s 00:29:59.777 user 0m5.098s 00:29:59.777 sys 0m5.770s 00:29:59.777 10:23:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:59.777 10:23:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:59.777 ************************************ 00:29:59.777 END TEST nvmf_aer 00:29:59.777 ************************************ 00:29:59.777 10:23:45 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:59.777 10:23:45 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:59.777 10:23:45 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:59.777 10:23:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:59.777 ************************************ 00:29:59.777 START TEST nvmf_async_init 00:29:59.777 ************************************ 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:59.777 * Looking for test storage... 
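The AER exchange that the nvmf_aer test above just walked through reduces to a short target-side RPC sequence. A condensed sketch, assuming a running nvmf_tgt reachable on its default RPC socket and an SPDK checkout for scripts/rpc.py and the test/nvme/aer/aer client; the names, sizes, NQNs and addresses are taken from the log, not invented here:

  # expose a malloc bdev through an NVMe-oF/TCP subsystem limited to 2 namespaces
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # start the AER client used by the test; it connects, registers async event callbacks and
  # touches /tmp/aer_touch_file once it is ready (the harness polls for that file)
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # adding a second namespace is what produces the "aer_cb - Changed Namespace" notice in the log
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  scripts/rpc.py nvmf_get_subsystems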
00:29:59.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=173b8f1606e34f8593cf0ff3a76b3803 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:59.777 10:23:45 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:59.777 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:59.778 10:23:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:59.778 10:23:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:06.402 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.402 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:06.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:06.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
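This device scan amounts to a sysfs lookup: each supported NIC's PCI address is mapped to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net. A stand-alone sketch of that lookup, using the two E810 ports (device ID 0x159b) found on this machine; other systems will report different BDFs and interface names:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    # every network-class PCI device exposes its netdev name(s) under sysfs
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net device under $pci: ${dev##*/}"   # prints cvl_0_0 / cvl_0_1 on this host
    done
done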
00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:06.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.403 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:06.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:30:06.694 00:30:06.694 --- 10.0.0.2 ping statistics --- 00:30:06.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.694 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.558 ms 00:30:06.694 00:30:06.694 --- 10.0.0.1 ping statistics --- 00:30:06.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.694 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:06.694 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2979117 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2979117 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 2979117 ']' 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:06.955 10:23:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:06.955 [2024-05-15 10:23:52.541233] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
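Stripped of the xtrace prefixes, the target-side plumbing that nvmf_tcp_init performed above reduces to the following shell sequence (a minimal sketch; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply the values used in this run):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                          # namespace that will hold the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the first E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic reach the initiator side
ping -c 1 10.0.0.2                                                    # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target namespace -> initiator

With connectivity verified, nvmf_tgt is started inside the namespace on a single core (-m 0x1) with all tracepoint groups enabled (-e 0xFFFF), and async_init.sh then drives it through the RPC sequence traced below. Condensed, and assuming the test's rpc_cmd helper resolves to scripts/rpc.py against the default /var/tmp/spdk.sock (an assumption, not shown in the trace), that sequence is:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

./scripts/rpc.py nvmf_create_transport -t tcp -o                      # options come from NVMF_TRANSPORT_OPTS for this run
./scripts/rpc.py bdev_null_create null0 1024 512                      # 1024 MiB null bdev, 512 B blocks (2097152 blocks)
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$(uuidgen | tr -d -)"   # nguid built the way the test does
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

# TLS variant exercised afterwards (marked experimental in this SPDK version);
# /tmp/psk.key stands in for the mktemp file the test actually creates:
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.key && chmod 0600 /tmp/psk.key
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key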
00:30:06.955 [2024-05-15 10:23:52.541298] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.955 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.955 [2024-05-15 10:23:52.612597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.955 [2024-05-15 10:23:52.644275] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.955 [2024-05-15 10:23:52.644321] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.955 [2024-05-15 10:23:52.644328] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.955 [2024-05-15 10:23:52.644335] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.955 [2024-05-15 10:23:52.644341] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.955 [2024-05-15 10:23:52.644359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.545 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:07.545 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:30:07.545 10:23:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:07.545 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:07.545 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:07.806 [2024-05-15 10:23:53.352600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:07.806 null0 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 173b8f1606e34f8593cf0ff3a76b3803 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:07.806 [2024-05-15 10:23:53.408687] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:07.806 [2024-05-15 10:23:53.408876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:07.806 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.067 nvme0n1 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.067 [ 00:30:08.067 { 00:30:08.067 "name": "nvme0n1", 00:30:08.067 "aliases": [ 00:30:08.067 "173b8f16-06e3-4f85-93cf-0ff3a76b3803" 00:30:08.067 ], 00:30:08.067 "product_name": "NVMe disk", 00:30:08.067 "block_size": 512, 00:30:08.067 "num_blocks": 2097152, 00:30:08.067 "uuid": "173b8f16-06e3-4f85-93cf-0ff3a76b3803", 00:30:08.067 "assigned_rate_limits": { 00:30:08.067 "rw_ios_per_sec": 0, 00:30:08.067 "rw_mbytes_per_sec": 0, 00:30:08.067 "r_mbytes_per_sec": 0, 00:30:08.067 "w_mbytes_per_sec": 0 00:30:08.067 }, 00:30:08.067 "claimed": false, 00:30:08.067 "zoned": false, 00:30:08.067 "supported_io_types": { 00:30:08.067 "read": true, 00:30:08.067 "write": true, 00:30:08.067 "unmap": false, 00:30:08.067 "write_zeroes": true, 00:30:08.067 "flush": true, 00:30:08.067 "reset": true, 00:30:08.067 "compare": true, 00:30:08.067 "compare_and_write": true, 00:30:08.067 "abort": true, 00:30:08.067 "nvme_admin": true, 00:30:08.067 "nvme_io": true 00:30:08.067 }, 00:30:08.067 "memory_domains": [ 00:30:08.067 { 00:30:08.067 "dma_device_id": "system", 00:30:08.067 "dma_device_type": 1 00:30:08.067 } 00:30:08.067 ], 00:30:08.067 "driver_specific": { 00:30:08.067 "nvme": [ 00:30:08.067 { 00:30:08.067 "trid": { 00:30:08.067 "trtype": "TCP", 00:30:08.067 "adrfam": "IPv4", 00:30:08.067 "traddr": "10.0.0.2", 00:30:08.067 "trsvcid": "4420", 00:30:08.067 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.067 }, 
00:30:08.067 "ctrlr_data": { 00:30:08.067 "cntlid": 1, 00:30:08.067 "vendor_id": "0x8086", 00:30:08.067 "model_number": "SPDK bdev Controller", 00:30:08.067 "serial_number": "00000000000000000000", 00:30:08.067 "firmware_revision": "24.05", 00:30:08.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.067 "oacs": { 00:30:08.067 "security": 0, 00:30:08.067 "format": 0, 00:30:08.067 "firmware": 0, 00:30:08.067 "ns_manage": 0 00:30:08.067 }, 00:30:08.067 "multi_ctrlr": true, 00:30:08.067 "ana_reporting": false 00:30:08.067 }, 00:30:08.067 "vs": { 00:30:08.067 "nvme_version": "1.3" 00:30:08.067 }, 00:30:08.067 "ns_data": { 00:30:08.067 "id": 1, 00:30:08.067 "can_share": true 00:30:08.067 } 00:30:08.067 } 00:30:08.067 ], 00:30:08.067 "mp_policy": "active_passive" 00:30:08.067 } 00:30:08.067 } 00:30:08.067 ] 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.067 [2024-05-15 10:23:53.673400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:08.067 [2024-05-15 10:23:53.673458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf2b50 (9): Bad file descriptor 00:30:08.067 [2024-05-15 10:23:53.805387] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.067 [ 00:30:08.067 { 00:30:08.067 "name": "nvme0n1", 00:30:08.067 "aliases": [ 00:30:08.067 "173b8f16-06e3-4f85-93cf-0ff3a76b3803" 00:30:08.067 ], 00:30:08.067 "product_name": "NVMe disk", 00:30:08.067 "block_size": 512, 00:30:08.067 "num_blocks": 2097152, 00:30:08.067 "uuid": "173b8f16-06e3-4f85-93cf-0ff3a76b3803", 00:30:08.067 "assigned_rate_limits": { 00:30:08.067 "rw_ios_per_sec": 0, 00:30:08.067 "rw_mbytes_per_sec": 0, 00:30:08.067 "r_mbytes_per_sec": 0, 00:30:08.067 "w_mbytes_per_sec": 0 00:30:08.067 }, 00:30:08.067 "claimed": false, 00:30:08.067 "zoned": false, 00:30:08.067 "supported_io_types": { 00:30:08.067 "read": true, 00:30:08.067 "write": true, 00:30:08.067 "unmap": false, 00:30:08.067 "write_zeroes": true, 00:30:08.067 "flush": true, 00:30:08.067 "reset": true, 00:30:08.067 "compare": true, 00:30:08.067 "compare_and_write": true, 00:30:08.067 "abort": true, 00:30:08.067 "nvme_admin": true, 00:30:08.067 "nvme_io": true 00:30:08.067 }, 00:30:08.067 "memory_domains": [ 00:30:08.067 { 00:30:08.067 "dma_device_id": "system", 00:30:08.067 "dma_device_type": 1 00:30:08.067 } 00:30:08.067 ], 00:30:08.067 "driver_specific": { 00:30:08.067 "nvme": [ 00:30:08.067 { 00:30:08.067 "trid": { 00:30:08.067 "trtype": "TCP", 00:30:08.067 "adrfam": "IPv4", 00:30:08.067 "traddr": "10.0.0.2", 00:30:08.067 "trsvcid": "4420", 00:30:08.067 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.067 }, 00:30:08.067 "ctrlr_data": { 00:30:08.067 "cntlid": 2, 00:30:08.067 
"vendor_id": "0x8086", 00:30:08.067 "model_number": "SPDK bdev Controller", 00:30:08.067 "serial_number": "00000000000000000000", 00:30:08.067 "firmware_revision": "24.05", 00:30:08.067 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.067 "oacs": { 00:30:08.067 "security": 0, 00:30:08.067 "format": 0, 00:30:08.067 "firmware": 0, 00:30:08.067 "ns_manage": 0 00:30:08.067 }, 00:30:08.067 "multi_ctrlr": true, 00:30:08.067 "ana_reporting": false 00:30:08.067 }, 00:30:08.067 "vs": { 00:30:08.067 "nvme_version": "1.3" 00:30:08.067 }, 00:30:08.067 "ns_data": { 00:30:08.067 "id": 1, 00:30:08.067 "can_share": true 00:30:08.067 } 00:30:08.067 } 00:30:08.067 ], 00:30:08.067 "mp_policy": "active_passive" 00:30:08.067 } 00:30:08.067 } 00:30:08.067 ] 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.958fCGmwM7 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.958fCGmwM7 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.067 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.329 [2024-05-15 10:23:53.874016] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:08.329 [2024-05-15 10:23:53.874135] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.958fCGmwM7 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.329 [2024-05-15 10:23:53.886038] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.329 10:23:53 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.958fCGmwM7 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.329 [2024-05-15 10:23:53.898071] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:08.329 [2024-05-15 10:23:53.898109] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:08.329 nvme0n1 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.329 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.329 [ 00:30:08.329 { 00:30:08.329 "name": "nvme0n1", 00:30:08.329 "aliases": [ 00:30:08.329 "173b8f16-06e3-4f85-93cf-0ff3a76b3803" 00:30:08.329 ], 00:30:08.329 "product_name": "NVMe disk", 00:30:08.329 "block_size": 512, 00:30:08.329 "num_blocks": 2097152, 00:30:08.329 "uuid": "173b8f16-06e3-4f85-93cf-0ff3a76b3803", 00:30:08.329 "assigned_rate_limits": { 00:30:08.329 "rw_ios_per_sec": 0, 00:30:08.329 "rw_mbytes_per_sec": 0, 00:30:08.329 "r_mbytes_per_sec": 0, 00:30:08.329 "w_mbytes_per_sec": 0 00:30:08.329 }, 00:30:08.329 "claimed": false, 00:30:08.329 "zoned": false, 00:30:08.329 "supported_io_types": { 00:30:08.329 "read": true, 00:30:08.329 "write": true, 00:30:08.329 "unmap": false, 00:30:08.329 "write_zeroes": true, 00:30:08.329 "flush": true, 00:30:08.329 "reset": true, 00:30:08.330 "compare": true, 00:30:08.330 "compare_and_write": true, 00:30:08.330 "abort": true, 00:30:08.330 "nvme_admin": true, 00:30:08.330 "nvme_io": true 00:30:08.330 }, 00:30:08.330 "memory_domains": [ 00:30:08.330 { 00:30:08.330 "dma_device_id": "system", 00:30:08.330 "dma_device_type": 1 00:30:08.330 } 00:30:08.330 ], 00:30:08.330 "driver_specific": { 00:30:08.330 "nvme": [ 00:30:08.330 { 00:30:08.330 "trid": { 00:30:08.330 "trtype": "TCP", 00:30:08.330 "adrfam": "IPv4", 00:30:08.330 "traddr": "10.0.0.2", 00:30:08.330 "trsvcid": "4421", 00:30:08.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:08.330 }, 00:30:08.330 "ctrlr_data": { 00:30:08.330 "cntlid": 3, 00:30:08.330 "vendor_id": "0x8086", 00:30:08.330 "model_number": "SPDK bdev Controller", 00:30:08.330 "serial_number": "00000000000000000000", 00:30:08.330 "firmware_revision": "24.05", 00:30:08.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:08.330 "oacs": { 00:30:08.330 "security": 0, 00:30:08.330 "format": 0, 00:30:08.330 "firmware": 0, 00:30:08.330 "ns_manage": 0 00:30:08.330 }, 00:30:08.330 "multi_ctrlr": true, 00:30:08.330 "ana_reporting": false 00:30:08.330 }, 00:30:08.330 "vs": { 00:30:08.330 "nvme_version": "1.3" 00:30:08.330 }, 00:30:08.330 "ns_data": { 00:30:08.330 "id": 1, 00:30:08.330 "can_share": true 00:30:08.330 } 00:30:08.330 } 00:30:08.330 ], 00:30:08.330 "mp_policy": "active_passive" 00:30:08.330 } 00:30:08.330 } 00:30:08.330 ] 00:30:08.330 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.330 10:23:53 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.330 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:08.330 10:23:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.958fCGmwM7 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:08.330 rmmod nvme_tcp 00:30:08.330 rmmod nvme_fabrics 00:30:08.330 rmmod nvme_keyring 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2979117 ']' 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2979117 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 2979117 ']' 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 2979117 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:08.330 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2979117 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2979117' 00:30:08.593 killing process with pid 2979117 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 2979117 00:30:08.593 [2024-05-15 10:23:54.157518] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:08.593 [2024-05-15 10:23:54.157546] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:08.593 [2024-05-15 10:23:54.157554] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 2979117 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.593 10:23:54 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.593 10:23:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.144 10:23:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.144 00:30:11.144 real 0m11.291s 00:30:11.144 user 0m4.059s 00:30:11.144 sys 0m5.698s 00:30:11.144 10:23:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:11.144 10:23:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:11.144 ************************************ 00:30:11.144 END TEST nvmf_async_init 00:30:11.144 ************************************ 00:30:11.144 10:23:56 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:11.144 10:23:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:11.144 10:23:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:11.144 10:23:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.144 ************************************ 00:30:11.144 START TEST dma 00:30:11.144 ************************************ 00:30:11.144 10:23:56 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:11.144 * Looking for test storage... 
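host/dma.sh is effectively a no-op on TCP: it only exercises DMA offload paths for RDMA transports, so the guard at its top (traced a little further down as '[ tcp != rdma ]' followed by exit 0) ends the test immediately, which is why the runtimes reported for it are near zero. In sketch form (the $TEST_TRANSPORT variable name is an assumption; the trace only shows the expanded literal):

# top of host/dma.sh, in effect:
if [ "$TEST_TRANSPORT" != "rdma" ]; then
    exit 0    # nothing to do for tcp
fi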
00:30:11.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.144 10:23:56 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.144 10:23:56 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.144 10:23:56 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.144 10:23:56 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.144 10:23:56 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.144 10:23:56 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.144 10:23:56 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.144 10:23:56 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:11.144 10:23:56 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.144 10:23:56 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.144 10:23:56 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:11.144 10:23:56 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:11.144 00:30:11.144 real 0m0.134s 00:30:11.144 user 0m0.065s 00:30:11.144 sys 0m0.078s 00:30:11.144 10:23:56 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:11.144 10:23:56 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:11.145 ************************************ 00:30:11.145 END TEST dma 00:30:11.145 ************************************ 00:30:11.145 10:23:56 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:11.145 10:23:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:11.145 10:23:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:11.145 10:23:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.145 ************************************ 00:30:11.145 START TEST nvmf_identify 00:30:11.145 ************************************ 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:11.145 * Looking for test storage... 
00:30:11.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:11.145 10:23:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:19.301 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:19.301 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:19.301 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:19.301 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:19.301 10:24:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:19.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:19.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:30:19.301 00:30:19.301 --- 10.0.0.2 ping statistics --- 00:30:19.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.301 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:19.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:19.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.462 ms 00:30:19.301 00:30:19.301 --- 10.0.0.1 ping statistics --- 00:30:19.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:19.301 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:19.301 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2983689 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2983689 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 2983689 ']' 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:19.302 10:24:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.302 [2024-05-15 10:24:04.234633] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:30:19.302 [2024-05-15 10:24:04.234698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:19.302 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.302 [2024-05-15 10:24:04.305868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:19.302 [2024-05-15 10:24:04.347199] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
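The nvmf_tcp_init records above amount to a small, reproducible setup: one port of the NIC pair is moved into a private network namespace to act as the target, the other port stays in the root namespace as the initiator, reachability is checked in both directions, and the nvme-tcp module is loaded. What follows is a minimal standalone sketch of that same sequence, not part of the captured log; it assumes the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing reported above (run as root, adjust interface names for other hardware).

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init sequence recorded in the log above.
# Interface names and addresses are taken from the log; everything else
# is a straightforward re-statement of the commands shown there.
set -euo pipefail

target_if=cvl_0_0            # moved into its own namespace, owned by the target
initiator_if=cvl_0_1         # stays in the default namespace, used by the host
target_ns=${target_if}_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

# Isolate the target port in a network namespace so target and initiator
# traffic really crosses the link instead of taking a local shortcut.
ip netns add "$target_ns"
ip link set "$target_if" netns "$target_ns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$target_ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$target_ns" ip link set "$target_if" up
ip netns exec "$target_ns" ip link set lo up

# Admit NVMe/TCP traffic on the default port and verify reachability both ways.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$target_ns" ping -c 1 10.0.0.1

modprobe nvme-tcp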
00:30:19.302 [2024-05-15 10:24:04.347245] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.302 [2024-05-15 10:24:04.347256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.302 [2024-05-15 10:24:04.347262] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.302 [2024-05-15 10:24:04.347267] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:19.302 [2024-05-15 10:24:04.347411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.302 [2024-05-15 10:24:04.347645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:19.302 [2024-05-15 10:24:04.347806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:19.302 [2024-05-15 10:24:04.347807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.302 [2024-05-15 10:24:05.028866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.302 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.566 Malloc0 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 
-- # xtrace_disable 00:30:19.566 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.567 [2024-05-15 10:24:05.128149] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:19.567 [2024-05-15 10:24:05.128380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.567 [ 00:30:19.567 { 00:30:19.567 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:19.567 "subtype": "Discovery", 00:30:19.567 "listen_addresses": [ 00:30:19.567 { 00:30:19.567 "trtype": "TCP", 00:30:19.567 "adrfam": "IPv4", 00:30:19.567 "traddr": "10.0.0.2", 00:30:19.567 "trsvcid": "4420" 00:30:19.567 } 00:30:19.567 ], 00:30:19.567 "allow_any_host": true, 00:30:19.567 "hosts": [] 00:30:19.567 }, 00:30:19.567 { 00:30:19.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:19.567 "subtype": "NVMe", 00:30:19.567 "listen_addresses": [ 00:30:19.567 { 00:30:19.567 "trtype": "TCP", 00:30:19.567 "adrfam": "IPv4", 00:30:19.567 "traddr": "10.0.0.2", 00:30:19.567 "trsvcid": "4420" 00:30:19.567 } 00:30:19.567 ], 00:30:19.567 "allow_any_host": true, 00:30:19.567 "hosts": [], 00:30:19.567 "serial_number": "SPDK00000000000001", 00:30:19.567 "model_number": "SPDK bdev Controller", 00:30:19.567 "max_namespaces": 32, 00:30:19.567 "min_cntlid": 1, 00:30:19.567 "max_cntlid": 65519, 00:30:19.567 "namespaces": [ 00:30:19.567 { 00:30:19.567 "nsid": 1, 00:30:19.567 "bdev_name": "Malloc0", 00:30:19.567 "name": "Malloc0", 00:30:19.567 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:19.567 "eui64": "ABCDEF0123456789", 00:30:19.567 "uuid": "51932790-ab1e-4c0c-84f4-04f95f8a65fd" 00:30:19.567 } 00:30:19.567 ] 00:30:19.567 } 00:30:19.567 ] 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.567 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:19.567 [2024-05-15 10:24:05.184027] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
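The host/identify.sh steps above start nvmf_tgt inside the target namespace and then build the target configuration over RPC: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and listeners for both the subsystem and the discovery service on 10.0.0.2:4420, before spdk_nvme_identify is pointed at the discovery NQN. The sketch below restates that sequence standalone under the assumption that rpc_cmd in the harness forwards to scripts/rpc.py on the default /var/tmp/spdk.sock; SPDK_DIR and the socket-polling loop are illustrative stand-ins for the harness helpers, while the RPC names and arguments mirror the log.

#!/usr/bin/env bash
# Sketch of the target bring-up recorded above. SPDK_DIR and the socket wait
# are stand-ins for the harness; every RPC name and argument mirrors the log.
set -euo pipefail

SPDK_DIR=/path/to/spdk                 # hypothetical checkout location
rpc="$SPDK_DIR/scripts/rpc.py"         # assumed equivalent of the harness's rpc_cmd
ns=cvl_0_0_ns_spdk                     # namespace created during nvmf_tcp_init

# Launch the target inside the target namespace (same core and tracepoint masks as the log).
ip netns exec "$ns" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

# Crude stand-in for the harness's waitforlisten: wait for the RPC socket to appear.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_get_subsystems

# Query the discovery subsystem from the initiator side, as identify.sh does next.
# (The harness later tears the target down via its nvmftestfini trap.)
"$SPDK_DIR/build/bin/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all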
00:30:19.567 [2024-05-15 10:24:05.184073] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2983976 ] 00:30:19.567 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.567 [2024-05-15 10:24:05.215932] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:19.567 [2024-05-15 10:24:05.215982] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:19.567 [2024-05-15 10:24:05.215987] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:19.567 [2024-05-15 10:24:05.215998] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:19.567 [2024-05-15 10:24:05.216005] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:19.567 [2024-05-15 10:24:05.219321] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:19.567 [2024-05-15 10:24:05.219350] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1839a20 0 00:30:19.567 [2024-05-15 10:24:05.219753] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:19.567 [2024-05-15 10:24:05.219773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:19.567 [2024-05-15 10:24:05.219777] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:19.567 [2024-05-15 10:24:05.219781] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:19.567 [2024-05-15 10:24:05.219818] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.219824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.219828] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.567 [2024-05-15 10:24:05.219843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:19.567 [2024-05-15 10:24:05.219860] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.567 [2024-05-15 10:24:05.227301] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.567 [2024-05-15 10:24:05.227310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.567 [2024-05-15 10:24:05.227313] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.227318] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.567 [2024-05-15 10:24:05.227329] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:19.567 [2024-05-15 10:24:05.227337] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:19.567 [2024-05-15 10:24:05.227346] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:19.567 [2024-05-15 10:24:05.227357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.227361] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.227364] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.567 [2024-05-15 10:24:05.227372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.567 [2024-05-15 10:24:05.227384] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.567 [2024-05-15 10:24:05.227706] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.567 [2024-05-15 10:24:05.227718] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.567 [2024-05-15 10:24:05.227721] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.227726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.567 [2024-05-15 10:24:05.227733] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:19.567 [2024-05-15 10:24:05.227741] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:19.567 [2024-05-15 10:24:05.227749] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.227753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.227756] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.567 [2024-05-15 10:24:05.227764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.567 [2024-05-15 10:24:05.227776] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.567 [2024-05-15 10:24:05.228021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.567 [2024-05-15 10:24:05.228029] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.567 [2024-05-15 10:24:05.228032] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.228036] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.567 [2024-05-15 10:24:05.228043] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:19.567 [2024-05-15 10:24:05.228051] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:19.567 [2024-05-15 10:24:05.228058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.228062] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.567 [2024-05-15 10:24:05.228066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.567 [2024-05-15 10:24:05.228073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.567 [2024-05-15 10:24:05.228084] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.568 [2024-05-15 10:24:05.228370] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.568 [2024-05-15 
10:24:05.228379] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.568 [2024-05-15 10:24:05.228382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.228386] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.568 [2024-05-15 10:24:05.228393] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:19.568 [2024-05-15 10:24:05.228407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.228411] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.228414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.228421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.568 [2024-05-15 10:24:05.228434] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.568 [2024-05-15 10:24:05.228695] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.568 [2024-05-15 10:24:05.228703] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.568 [2024-05-15 10:24:05.228706] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.228710] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.568 [2024-05-15 10:24:05.228716] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:19.568 [2024-05-15 10:24:05.228721] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:19.568 [2024-05-15 10:24:05.228728] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:19.568 [2024-05-15 10:24:05.228833] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:19.568 [2024-05-15 10:24:05.228838] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:19.568 [2024-05-15 10:24:05.228847] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.228851] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.228854] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.228861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.568 [2024-05-15 10:24:05.228873] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.568 [2024-05-15 10:24:05.229151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.568 [2024-05-15 10:24:05.229158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.568 [2024-05-15 10:24:05.229162] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.568 [2024-05-15 10:24:05.229172] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:19.568 [2024-05-15 10:24:05.229182] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229185] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229189] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.229196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.568 [2024-05-15 10:24:05.229207] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.568 [2024-05-15 10:24:05.229460] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.568 [2024-05-15 10:24:05.229469] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.568 [2024-05-15 10:24:05.229472] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229476] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.568 [2024-05-15 10:24:05.229485] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:19.568 [2024-05-15 10:24:05.229490] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:19.568 [2024-05-15 10:24:05.229498] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:19.568 [2024-05-15 10:24:05.229512] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:19.568 [2024-05-15 10:24:05.229521] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229525] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.229532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.568 [2024-05-15 10:24:05.229544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.568 [2024-05-15 10:24:05.229932] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.568 [2024-05-15 10:24:05.229943] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.568 [2024-05-15 10:24:05.229946] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229951] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1839a20): datao=0, datal=4096, cccid=0 00:30:19.568 [2024-05-15 10:24:05.229955] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18a4400) on tqpair(0x1839a20): expected_datao=0, payload_size=4096 00:30:19.568 [2024-05-15 10:24:05.229960] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229968] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.229973] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.568 [2024-05-15 10:24:05.272308] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.568 [2024-05-15 10:24:05.272311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272315] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.568 [2024-05-15 10:24:05.272324] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:19.568 [2024-05-15 10:24:05.272329] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:19.568 [2024-05-15 10:24:05.272333] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:19.568 [2024-05-15 10:24:05.272338] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:19.568 [2024-05-15 10:24:05.272343] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:19.568 [2024-05-15 10:24:05.272348] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:19.568 [2024-05-15 10:24:05.272360] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:19.568 [2024-05-15 10:24:05.272369] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272373] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272376] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.272384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:19.568 [2024-05-15 10:24:05.272400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.568 [2024-05-15 10:24:05.272656] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.568 [2024-05-15 10:24:05.272665] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.568 [2024-05-15 10:24:05.272668] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272672] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4400) on tqpair=0x1839a20 00:30:19.568 [2024-05-15 10:24:05.272685] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272689] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.272699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:30:19.568 [2024-05-15 10:24:05.272705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272712] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.272718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.568 [2024-05-15 10:24:05.272724] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272728] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.272737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.568 [2024-05-15 10:24:05.272743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272749] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.272755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.568 [2024-05-15 10:24:05.272760] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:19.568 [2024-05-15 10:24:05.272768] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:19.568 [2024-05-15 10:24:05.272775] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.568 [2024-05-15 10:24:05.272778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1839a20) 00:30:19.568 [2024-05-15 10:24:05.272785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.568 [2024-05-15 10:24:05.272799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4400, cid 0, qid 0 00:30:19.568 [2024-05-15 10:24:05.272804] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4560, cid 1, qid 0 00:30:19.568 [2024-05-15 10:24:05.272808] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a46c0, cid 2, qid 0 00:30:19.568 [2024-05-15 10:24:05.272813] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.568 [2024-05-15 10:24:05.272818] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4980, cid 4, qid 0 00:30:19.569 [2024-05-15 10:24:05.273170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.569 [2024-05-15 10:24:05.273178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.569 [2024-05-15 10:24:05.273181] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273188] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4980) on tqpair=0x1839a20 
00:30:19.569 [2024-05-15 10:24:05.273197] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:19.569 [2024-05-15 10:24:05.273203] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:19.569 [2024-05-15 10:24:05.273215] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273219] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1839a20) 00:30:19.569 [2024-05-15 10:24:05.273225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.569 [2024-05-15 10:24:05.273237] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4980, cid 4, qid 0 00:30:19.569 [2024-05-15 10:24:05.273527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.569 [2024-05-15 10:24:05.273536] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.569 [2024-05-15 10:24:05.273540] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273544] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1839a20): datao=0, datal=4096, cccid=4 00:30:19.569 [2024-05-15 10:24:05.273548] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18a4980) on tqpair(0x1839a20): expected_datao=0, payload_size=4096 00:30:19.569 [2024-05-15 10:24:05.273553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273560] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273564] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.569 [2024-05-15 10:24:05.273792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.569 [2024-05-15 10:24:05.273795] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273799] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4980) on tqpair=0x1839a20 00:30:19.569 [2024-05-15 10:24:05.273813] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:19.569 [2024-05-15 10:24:05.273841] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273845] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1839a20) 00:30:19.569 [2024-05-15 10:24:05.273852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.569 [2024-05-15 10:24:05.273859] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.273866] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1839a20) 00:30:19.569 [2024-05-15 10:24:05.273873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.569 [2024-05-15 10:24:05.273891] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4980, cid 4, qid 0 00:30:19.569 [2024-05-15 10:24:05.273896] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4ae0, cid 5, qid 0 00:30:19.569 [2024-05-15 10:24:05.274205] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.569 [2024-05-15 10:24:05.274213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.569 [2024-05-15 10:24:05.274217] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.274221] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1839a20): datao=0, datal=1024, cccid=4 00:30:19.569 [2024-05-15 10:24:05.274225] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18a4980) on tqpair(0x1839a20): expected_datao=0, payload_size=1024 00:30:19.569 [2024-05-15 10:24:05.274233] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.274239] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.274243] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.274249] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.569 [2024-05-15 10:24:05.274254] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.569 [2024-05-15 10:24:05.274258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.274261] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4ae0) on tqpair=0x1839a20 00:30:19.569 [2024-05-15 10:24:05.314579] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.569 [2024-05-15 10:24:05.314592] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.569 [2024-05-15 10:24:05.314595] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.314599] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4980) on tqpair=0x1839a20 00:30:19.569 [2024-05-15 10:24:05.314612] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.314616] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1839a20) 00:30:19.569 [2024-05-15 10:24:05.314623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.569 [2024-05-15 10:24:05.314640] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4980, cid 4, qid 0 00:30:19.569 [2024-05-15 10:24:05.314894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.569 [2024-05-15 10:24:05.314903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.569 [2024-05-15 10:24:05.314907] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.314910] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1839a20): datao=0, datal=3072, cccid=4 00:30:19.569 [2024-05-15 10:24:05.314915] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18a4980) on tqpair(0x1839a20): expected_datao=0, payload_size=3072 00:30:19.569 [2024-05-15 10:24:05.314919] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.314926] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:30:19.569 [2024-05-15 10:24:05.314930] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.315333] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.569 [2024-05-15 10:24:05.315340] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.569 [2024-05-15 10:24:05.315343] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.315347] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4980) on tqpair=0x1839a20 00:30:19.569 [2024-05-15 10:24:05.315357] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.315361] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1839a20) 00:30:19.569 [2024-05-15 10:24:05.315367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.569 [2024-05-15 10:24:05.315382] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4980, cid 4, qid 0 00:30:19.569 [2024-05-15 10:24:05.315667] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.569 [2024-05-15 10:24:05.315675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.569 [2024-05-15 10:24:05.315679] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.315683] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1839a20): datao=0, datal=8, cccid=4 00:30:19.569 [2024-05-15 10:24:05.315687] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18a4980) on tqpair(0x1839a20): expected_datao=0, payload_size=8 00:30:19.569 [2024-05-15 10:24:05.315691] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.315702] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.569 [2024-05-15 10:24:05.315705] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.837 [2024-05-15 10:24:05.360302] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.837 [2024-05-15 10:24:05.360311] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.837 [2024-05-15 10:24:05.360315] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.837 [2024-05-15 10:24:05.360319] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4980) on tqpair=0x1839a20 00:30:19.837 ===================================================== 00:30:19.837 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:19.837 ===================================================== 00:30:19.837 Controller Capabilities/Features 00:30:19.837 ================================ 00:30:19.837 Vendor ID: 0000 00:30:19.837 Subsystem Vendor ID: 0000 00:30:19.837 Serial Number: .................... 00:30:19.837 Model Number: ........................................ 
00:30:19.837 Firmware Version: 24.05 00:30:19.837 Recommended Arb Burst: 0 00:30:19.837 IEEE OUI Identifier: 00 00 00 00:30:19.837 Multi-path I/O 00:30:19.837 May have multiple subsystem ports: No 00:30:19.837 May have multiple controllers: No 00:30:19.837 Associated with SR-IOV VF: No 00:30:19.837 Max Data Transfer Size: 131072 00:30:19.837 Max Number of Namespaces: 0 00:30:19.837 Max Number of I/O Queues: 1024 00:30:19.837 NVMe Specification Version (VS): 1.3 00:30:19.837 NVMe Specification Version (Identify): 1.3 00:30:19.837 Maximum Queue Entries: 128 00:30:19.837 Contiguous Queues Required: Yes 00:30:19.837 Arbitration Mechanisms Supported 00:30:19.837 Weighted Round Robin: Not Supported 00:30:19.837 Vendor Specific: Not Supported 00:30:19.837 Reset Timeout: 15000 ms 00:30:19.837 Doorbell Stride: 4 bytes 00:30:19.837 NVM Subsystem Reset: Not Supported 00:30:19.837 Command Sets Supported 00:30:19.837 NVM Command Set: Supported 00:30:19.837 Boot Partition: Not Supported 00:30:19.837 Memory Page Size Minimum: 4096 bytes 00:30:19.837 Memory Page Size Maximum: 4096 bytes 00:30:19.837 Persistent Memory Region: Not Supported 00:30:19.837 Optional Asynchronous Events Supported 00:30:19.837 Namespace Attribute Notices: Not Supported 00:30:19.837 Firmware Activation Notices: Not Supported 00:30:19.837 ANA Change Notices: Not Supported 00:30:19.837 PLE Aggregate Log Change Notices: Not Supported 00:30:19.837 LBA Status Info Alert Notices: Not Supported 00:30:19.837 EGE Aggregate Log Change Notices: Not Supported 00:30:19.837 Normal NVM Subsystem Shutdown event: Not Supported 00:30:19.837 Zone Descriptor Change Notices: Not Supported 00:30:19.837 Discovery Log Change Notices: Supported 00:30:19.837 Controller Attributes 00:30:19.837 128-bit Host Identifier: Not Supported 00:30:19.837 Non-Operational Permissive Mode: Not Supported 00:30:19.837 NVM Sets: Not Supported 00:30:19.837 Read Recovery Levels: Not Supported 00:30:19.837 Endurance Groups: Not Supported 00:30:19.837 Predictable Latency Mode: Not Supported 00:30:19.837 Traffic Based Keep ALive: Not Supported 00:30:19.837 Namespace Granularity: Not Supported 00:30:19.837 SQ Associations: Not Supported 00:30:19.837 UUID List: Not Supported 00:30:19.837 Multi-Domain Subsystem: Not Supported 00:30:19.837 Fixed Capacity Management: Not Supported 00:30:19.837 Variable Capacity Management: Not Supported 00:30:19.837 Delete Endurance Group: Not Supported 00:30:19.837 Delete NVM Set: Not Supported 00:30:19.837 Extended LBA Formats Supported: Not Supported 00:30:19.837 Flexible Data Placement Supported: Not Supported 00:30:19.837 00:30:19.837 Controller Memory Buffer Support 00:30:19.837 ================================ 00:30:19.837 Supported: No 00:30:19.837 00:30:19.837 Persistent Memory Region Support 00:30:19.837 ================================ 00:30:19.837 Supported: No 00:30:19.837 00:30:19.837 Admin Command Set Attributes 00:30:19.837 ============================ 00:30:19.837 Security Send/Receive: Not Supported 00:30:19.837 Format NVM: Not Supported 00:30:19.837 Firmware Activate/Download: Not Supported 00:30:19.837 Namespace Management: Not Supported 00:30:19.837 Device Self-Test: Not Supported 00:30:19.837 Directives: Not Supported 00:30:19.837 NVMe-MI: Not Supported 00:30:19.837 Virtualization Management: Not Supported 00:30:19.837 Doorbell Buffer Config: Not Supported 00:30:19.837 Get LBA Status Capability: Not Supported 00:30:19.837 Command & Feature Lockdown Capability: Not Supported 00:30:19.837 Abort Command Limit: 1 00:30:19.837 Async 
Event Request Limit: 4 00:30:19.837 Number of Firmware Slots: N/A 00:30:19.837 Firmware Slot 1 Read-Only: N/A 00:30:19.837 Firmware Activation Without Reset: N/A 00:30:19.837 Multiple Update Detection Support: N/A 00:30:19.837 Firmware Update Granularity: No Information Provided 00:30:19.837 Per-Namespace SMART Log: No 00:30:19.837 Asymmetric Namespace Access Log Page: Not Supported 00:30:19.837 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:19.837 Command Effects Log Page: Not Supported 00:30:19.837 Get Log Page Extended Data: Supported 00:30:19.837 Telemetry Log Pages: Not Supported 00:30:19.837 Persistent Event Log Pages: Not Supported 00:30:19.837 Supported Log Pages Log Page: May Support 00:30:19.837 Commands Supported & Effects Log Page: Not Supported 00:30:19.837 Feature Identifiers & Effects Log Page:May Support 00:30:19.837 NVMe-MI Commands & Effects Log Page: May Support 00:30:19.837 Data Area 4 for Telemetry Log: Not Supported 00:30:19.837 Error Log Page Entries Supported: 128 00:30:19.837 Keep Alive: Not Supported 00:30:19.837 00:30:19.837 NVM Command Set Attributes 00:30:19.837 ========================== 00:30:19.837 Submission Queue Entry Size 00:30:19.837 Max: 1 00:30:19.837 Min: 1 00:30:19.837 Completion Queue Entry Size 00:30:19.837 Max: 1 00:30:19.837 Min: 1 00:30:19.837 Number of Namespaces: 0 00:30:19.837 Compare Command: Not Supported 00:30:19.837 Write Uncorrectable Command: Not Supported 00:30:19.838 Dataset Management Command: Not Supported 00:30:19.838 Write Zeroes Command: Not Supported 00:30:19.838 Set Features Save Field: Not Supported 00:30:19.838 Reservations: Not Supported 00:30:19.838 Timestamp: Not Supported 00:30:19.838 Copy: Not Supported 00:30:19.838 Volatile Write Cache: Not Present 00:30:19.838 Atomic Write Unit (Normal): 1 00:30:19.838 Atomic Write Unit (PFail): 1 00:30:19.838 Atomic Compare & Write Unit: 1 00:30:19.838 Fused Compare & Write: Supported 00:30:19.838 Scatter-Gather List 00:30:19.838 SGL Command Set: Supported 00:30:19.838 SGL Keyed: Supported 00:30:19.838 SGL Bit Bucket Descriptor: Not Supported 00:30:19.838 SGL Metadata Pointer: Not Supported 00:30:19.838 Oversized SGL: Not Supported 00:30:19.838 SGL Metadata Address: Not Supported 00:30:19.838 SGL Offset: Supported 00:30:19.838 Transport SGL Data Block: Not Supported 00:30:19.838 Replay Protected Memory Block: Not Supported 00:30:19.838 00:30:19.838 Firmware Slot Information 00:30:19.838 ========================= 00:30:19.838 Active slot: 0 00:30:19.838 00:30:19.838 00:30:19.838 Error Log 00:30:19.838 ========= 00:30:19.838 00:30:19.838 Active Namespaces 00:30:19.838 ================= 00:30:19.838 Discovery Log Page 00:30:19.838 ================== 00:30:19.838 Generation Counter: 2 00:30:19.838 Number of Records: 2 00:30:19.838 Record Format: 0 00:30:19.838 00:30:19.838 Discovery Log Entry 0 00:30:19.838 ---------------------- 00:30:19.838 Transport Type: 3 (TCP) 00:30:19.838 Address Family: 1 (IPv4) 00:30:19.838 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:19.838 Entry Flags: 00:30:19.838 Duplicate Returned Information: 1 00:30:19.838 Explicit Persistent Connection Support for Discovery: 1 00:30:19.838 Transport Requirements: 00:30:19.838 Secure Channel: Not Required 00:30:19.838 Port ID: 0 (0x0000) 00:30:19.838 Controller ID: 65535 (0xffff) 00:30:19.838 Admin Max SQ Size: 128 00:30:19.838 Transport Service Identifier: 4420 00:30:19.838 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:19.838 Transport Address: 10.0.0.2 00:30:19.838 
Discovery Log Entry 1 00:30:19.838 ---------------------- 00:30:19.838 Transport Type: 3 (TCP) 00:30:19.838 Address Family: 1 (IPv4) 00:30:19.838 Subsystem Type: 2 (NVM Subsystem) 00:30:19.838 Entry Flags: 00:30:19.838 Duplicate Returned Information: 0 00:30:19.838 Explicit Persistent Connection Support for Discovery: 0 00:30:19.838 Transport Requirements: 00:30:19.838 Secure Channel: Not Required 00:30:19.838 Port ID: 0 (0x0000) 00:30:19.838 Controller ID: 65535 (0xffff) 00:30:19.838 Admin Max SQ Size: 128 00:30:19.838 Transport Service Identifier: 4420 00:30:19.838 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:19.838 Transport Address: 10.0.0.2 [2024-05-15 10:24:05.360404] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:19.838 [2024-05-15 10:24:05.360418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.838 [2024-05-15 10:24:05.360424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.838 [2024-05-15 10:24:05.360430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.838 [2024-05-15 10:24:05.360436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.838 [2024-05-15 10:24:05.360444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.360448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.360451] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.838 [2024-05-15 10:24:05.360458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.838 [2024-05-15 10:24:05.360472] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.838 [2024-05-15 10:24:05.360643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.838 [2024-05-15 10:24:05.360651] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.838 [2024-05-15 10:24:05.360654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.360658] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.838 [2024-05-15 10:24:05.360666] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.360670] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.360673] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.838 [2024-05-15 10:24:05.360680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.838 [2024-05-15 10:24:05.360695] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.838 [2024-05-15 10:24:05.360972] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.838 [2024-05-15 10:24:05.360980] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.838 [2024-05-15 10:24:05.360983] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.360987] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.838 [2024-05-15 10:24:05.360993] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:19.838 [2024-05-15 10:24:05.360997] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:19.838 [2024-05-15 10:24:05.361007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361014] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.838 [2024-05-15 10:24:05.361021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.838 [2024-05-15 10:24:05.361036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.838 [2024-05-15 10:24:05.361336] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.838 [2024-05-15 10:24:05.361344] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.838 [2024-05-15 10:24:05.361347] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361351] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.838 [2024-05-15 10:24:05.361362] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361366] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361370] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.838 [2024-05-15 10:24:05.361376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.838 [2024-05-15 10:24:05.361388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.838 [2024-05-15 10:24:05.361639] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.838 [2024-05-15 10:24:05.361647] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.838 [2024-05-15 10:24:05.361650] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361654] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.838 [2024-05-15 10:24:05.361665] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361669] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361672] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.838 [2024-05-15 10:24:05.361679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.838 [2024-05-15 10:24:05.361690] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.838 [2024-05-15 10:24:05.361962] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.838 [2024-05-15 
10:24:05.361969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.838 [2024-05-15 10:24:05.361973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.838 [2024-05-15 10:24:05.361987] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361991] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.361995] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.838 [2024-05-15 10:24:05.362001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.838 [2024-05-15 10:24:05.362013] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.838 [2024-05-15 10:24:05.362286] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.838 [2024-05-15 10:24:05.362299] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.838 [2024-05-15 10:24:05.362302] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.362306] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.838 [2024-05-15 10:24:05.362317] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.362321] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.362324] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.838 [2024-05-15 10:24:05.362331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.838 [2024-05-15 10:24:05.362346] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.838 [2024-05-15 10:24:05.362612] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.838 [2024-05-15 10:24:05.362619] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.838 [2024-05-15 10:24:05.362622] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.838 [2024-05-15 10:24:05.362626] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.838 [2024-05-15 10:24:05.362637] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.362641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.362644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.839 [2024-05-15 10:24:05.362651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.362662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.839 [2024-05-15 10:24:05.362925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.362933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.362936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:19.839 [2024-05-15 10:24:05.362940] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.839 [2024-05-15 10:24:05.362951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.362954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.362958] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.839 [2024-05-15 10:24:05.362964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.362976] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.839 [2024-05-15 10:24:05.363247] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.363255] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.363258] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.839 [2024-05-15 10:24:05.363272] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363276] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363280] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.839 [2024-05-15 10:24:05.363286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.363305] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.839 [2024-05-15 10:24:05.363604] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.363611] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.363615] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.839 [2024-05-15 10:24:05.363630] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363634] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363637] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.839 [2024-05-15 10:24:05.363644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.363655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.839 [2024-05-15 10:24:05.363926] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.363934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.363937] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.839 [2024-05-15 10:24:05.363952] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363956] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.363959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.839 [2024-05-15 10:24:05.363966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.363978] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.839 [2024-05-15 10:24:05.364259] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.364266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.364270] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.364273] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.839 [2024-05-15 10:24:05.364284] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.364288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.368297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1839a20) 00:30:19.839 [2024-05-15 10:24:05.368305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.368319] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18a4820, cid 3, qid 0 00:30:19.839 [2024-05-15 10:24:05.368607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.368615] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.368618] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.368622] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x18a4820) on tqpair=0x1839a20 00:30:19.839 [2024-05-15 10:24:05.368631] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:30:19.839 00:30:19.839 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:19.839 [2024-05-15 10:24:05.406005] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
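(The spdk_nvme_identify invocation above connects over TCP to the NVM subsystem advertised in the discovery log — trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1 — and dumps its Identify data, which appears further down as the "NVMe over Fabrics controller at 10.0.0.2:4420" block. As a rough sketch only, and assuming the standard nvme-cli tool were available on the initiator — it is not part of this test run — the same target could be discovered and attached from the kernel initiator with:

    # not executed by this job; illustrative nvme-cli equivalent of the logged SPDK identify target
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

The SPDK tool instead performs the connect and Identify steps entirely in a single userspace process, which is why the per-PDU nvme_tcp.c debug records follow below.)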
00:30:19.839 [2024-05-15 10:24:05.406044] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2983981 ] 00:30:19.839 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.839 [2024-05-15 10:24:05.438800] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:30:19.839 [2024-05-15 10:24:05.438842] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:19.839 [2024-05-15 10:24:05.438847] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:19.839 [2024-05-15 10:24:05.438857] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:19.839 [2024-05-15 10:24:05.438865] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:19.839 [2024-05-15 10:24:05.442326] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:30:19.839 [2024-05-15 10:24:05.442353] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1187a20 0 00:30:19.839 [2024-05-15 10:24:05.450300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:19.839 [2024-05-15 10:24:05.450327] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:19.839 [2024-05-15 10:24:05.450332] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:19.839 [2024-05-15 10:24:05.450335] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:19.839 [2024-05-15 10:24:05.450366] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.450371] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.450375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.839 [2024-05-15 10:24:05.450387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:19.839 [2024-05-15 10:24:05.450403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.839 [2024-05-15 10:24:05.457300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.457308] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.457312] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.457316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.839 [2024-05-15 10:24:05.457326] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:19.839 [2024-05-15 10:24:05.457333] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:19.839 [2024-05-15 10:24:05.457338] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:19.839 [2024-05-15 10:24:05.457349] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.457353] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 
10:24:05.457356] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.839 [2024-05-15 10:24:05.457364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.457377] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.839 [2024-05-15 10:24:05.457645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.457654] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.457658] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.457662] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.839 [2024-05-15 10:24:05.457668] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:19.839 [2024-05-15 10:24:05.457677] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:19.839 [2024-05-15 10:24:05.457686] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.457690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.457693] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.839 [2024-05-15 10:24:05.457702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.839 [2024-05-15 10:24:05.457714] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.839 [2024-05-15 10:24:05.457968] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.839 [2024-05-15 10:24:05.457980] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.839 [2024-05-15 10:24:05.457984] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.839 [2024-05-15 10:24:05.457988] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.839 [2024-05-15 10:24:05.457994] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:19.839 [2024-05-15 10:24:05.458003] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:19.839 [2024-05-15 10:24:05.458011] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458015] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458018] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.458025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.840 [2024-05-15 10:24:05.458038] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.840 [2024-05-15 10:24:05.458270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.840 [2024-05-15 10:24:05.458278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:19.840 [2024-05-15 10:24:05.458281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.840 [2024-05-15 10:24:05.458297] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:19.840 [2024-05-15 10:24:05.458308] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458312] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458316] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.458323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.840 [2024-05-15 10:24:05.458335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.840 [2024-05-15 10:24:05.458580] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.840 [2024-05-15 10:24:05.458588] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.840 [2024-05-15 10:24:05.458591] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458595] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.840 [2024-05-15 10:24:05.458601] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:19.840 [2024-05-15 10:24:05.458606] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:19.840 [2024-05-15 10:24:05.458614] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:19.840 [2024-05-15 10:24:05.458719] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:19.840 [2024-05-15 10:24:05.458723] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:19.840 [2024-05-15 10:24:05.458732] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458735] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.458739] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.458746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.840 [2024-05-15 10:24:05.458761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.840 [2024-05-15 10:24:05.458991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.840 [2024-05-15 10:24:05.458999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.840 [2024-05-15 10:24:05.459003] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on 
tqpair=0x1187a20 00:30:19.840 [2024-05-15 10:24:05.459012] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:19.840 [2024-05-15 10:24:05.459023] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459027] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459030] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.459037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.840 [2024-05-15 10:24:05.459049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.840 [2024-05-15 10:24:05.459289] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.840 [2024-05-15 10:24:05.459305] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.840 [2024-05-15 10:24:05.459309] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459312] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.840 [2024-05-15 10:24:05.459318] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:19.840 [2024-05-15 10:24:05.459323] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:19.840 [2024-05-15 10:24:05.459331] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:19.840 [2024-05-15 10:24:05.459340] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:19.840 [2024-05-15 10:24:05.459348] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459352] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.459360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.840 [2024-05-15 10:24:05.459372] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.840 [2024-05-15 10:24:05.459650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.840 [2024-05-15 10:24:05.459659] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.840 [2024-05-15 10:24:05.459662] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459666] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=4096, cccid=0 00:30:19.840 [2024-05-15 10:24:05.459671] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2400) on tqpair(0x1187a20): expected_datao=0, payload_size=4096 00:30:19.840 [2024-05-15 10:24:05.459675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459925] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.459929] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504297] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.840 [2024-05-15 10:24:05.504306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.840 [2024-05-15 10:24:05.504310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504313] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.840 [2024-05-15 10:24:05.504325] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:19.840 [2024-05-15 10:24:05.504330] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:19.840 [2024-05-15 10:24:05.504334] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:19.840 [2024-05-15 10:24:05.504338] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:19.840 [2024-05-15 10:24:05.504342] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:19.840 [2024-05-15 10:24:05.504347] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:19.840 [2024-05-15 10:24:05.504358] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:19.840 [2024-05-15 10:24:05.504367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504370] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504374] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.504382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:19.840 [2024-05-15 10:24:05.504394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.840 [2024-05-15 10:24:05.504664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.840 [2024-05-15 10:24:05.504672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.840 [2024-05-15 10:24:05.504675] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504679] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2400) on tqpair=0x1187a20 00:30:19.840 [2024-05-15 10:24:05.504690] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504694] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.504704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.840 [2024-05-15 10:24:05.504710] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504713] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504717] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.504722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.840 [2024-05-15 10:24:05.504728] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504732] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504735] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.504741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.840 [2024-05-15 10:24:05.504747] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504750] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504754] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.504759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.840 [2024-05-15 10:24:05.504764] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:19.840 [2024-05-15 10:24:05.504775] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:19.840 [2024-05-15 10:24:05.504782] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.840 [2024-05-15 10:24:05.504785] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1187a20) 00:30:19.840 [2024-05-15 10:24:05.504792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.840 [2024-05-15 10:24:05.504806] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2400, cid 0, qid 0 00:30:19.840 [2024-05-15 10:24:05.504811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2560, cid 1, qid 0 00:30:19.840 [2024-05-15 10:24:05.504816] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f26c0, cid 2, qid 0 00:30:19.841 [2024-05-15 10:24:05.504820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.841 [2024-05-15 10:24:05.504825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2980, cid 4, qid 0 00:30:19.841 [2024-05-15 10:24:05.505118] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.841 [2024-05-15 10:24:05.505126] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.841 [2024-05-15 10:24:05.505130] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505134] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2980) on tqpair=0x1187a20 00:30:19.841 [2024-05-15 10:24:05.505142] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:19.841 [2024-05-15 10:24:05.505148] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.505156] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.505162] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.505169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1187a20) 00:30:19.841 [2024-05-15 10:24:05.505183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:19.841 [2024-05-15 10:24:05.505195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2980, cid 4, qid 0 00:30:19.841 [2024-05-15 10:24:05.505476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.841 [2024-05-15 10:24:05.505484] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.841 [2024-05-15 10:24:05.505487] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505491] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2980) on tqpair=0x1187a20 00:30:19.841 [2024-05-15 10:24:05.505545] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.505555] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.505562] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505566] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1187a20) 00:30:19.841 [2024-05-15 10:24:05.505573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.841 [2024-05-15 10:24:05.505588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2980, cid 4, qid 0 00:30:19.841 [2024-05-15 10:24:05.505844] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.841 [2024-05-15 10:24:05.505852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.841 [2024-05-15 10:24:05.505856] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505859] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=4096, cccid=4 00:30:19.841 [2024-05-15 10:24:05.505864] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2980) on tqpair(0x1187a20): expected_datao=0, payload_size=4096 00:30:19.841 [2024-05-15 10:24:05.505868] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505875] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.505878] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506076] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.841 [2024-05-15 10:24:05.506084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.841 [2024-05-15 10:24:05.506087] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506091] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2980) on tqpair=0x1187a20 00:30:19.841 [2024-05-15 10:24:05.506107] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:19.841 [2024-05-15 10:24:05.506119] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.506129] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.506136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506140] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1187a20) 00:30:19.841 [2024-05-15 10:24:05.506146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.841 [2024-05-15 10:24:05.506159] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2980, cid 4, qid 0 00:30:19.841 [2024-05-15 10:24:05.506417] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.841 [2024-05-15 10:24:05.506426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.841 [2024-05-15 10:24:05.506429] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506433] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=4096, cccid=4 00:30:19.841 [2024-05-15 10:24:05.506437] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2980) on tqpair(0x1187a20): expected_datao=0, payload_size=4096 00:30:19.841 [2024-05-15 10:24:05.506442] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506448] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506452] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506676] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.841 [2024-05-15 10:24:05.506684] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.841 [2024-05-15 10:24:05.506687] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506691] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2980) on tqpair=0x1187a20 00:30:19.841 [2024-05-15 10:24:05.506702] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.506711] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.506718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.506725] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1187a20) 00:30:19.841 [2024-05-15 10:24:05.506732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.841 [2024-05-15 10:24:05.506745] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2980, cid 4, qid 0 00:30:19.841 [2024-05-15 10:24:05.506987] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.841 [2024-05-15 10:24:05.506995] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.841 [2024-05-15 10:24:05.506998] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.507002] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=4096, cccid=4 00:30:19.841 [2024-05-15 10:24:05.507006] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2980) on tqpair(0x1187a20): expected_datao=0, payload_size=4096 00:30:19.841 [2024-05-15 10:24:05.507010] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.507109] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.507115] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.507363] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.841 [2024-05-15 10:24:05.507371] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.841 [2024-05-15 10:24:05.507374] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.841 [2024-05-15 10:24:05.507378] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2980) on tqpair=0x1187a20 00:30:19.841 [2024-05-15 10:24:05.507391] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.507399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.507406] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.507412] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.507417] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:19.841 [2024-05-15 10:24:05.507422] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:19.842 [2024-05-15 10:24:05.507426] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:19.842 [2024-05-15 10:24:05.507432] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:19.842 [2024-05-15 10:24:05.507447] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.507451] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.507458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.507464] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.507468] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.507471] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.507478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:19.842 [2024-05-15 10:24:05.507492] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2980, cid 4, qid 0 00:30:19.842 [2024-05-15 10:24:05.507498] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2ae0, cid 5, qid 0 00:30:19.842 [2024-05-15 10:24:05.507759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.507767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.507770] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.507774] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2980) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.507782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.507788] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.507791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.507795] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2ae0) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.507805] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.507809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.507816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.507827] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2ae0, cid 5, qid 0 00:30:19.842 [2024-05-15 10:24:05.508090] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.508097] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.508101] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.508104] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2ae0) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.508115] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.508118] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.508125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.508136] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2ae0, cid 5, qid 0 00:30:19.842 [2024-05-15 10:24:05.512299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.512307] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.512310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.512314] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2ae0) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.512323] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.512327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.512333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.512345] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2ae0, cid 5, qid 0 00:30:19.842 [2024-05-15 10:24:05.512597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.512604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.512608] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.512612] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2ae0) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.512625] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.512628] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.512635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.512645] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.512649] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.512655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.512662] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.512666] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.512672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.512682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.512685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1187a20) 00:30:19.842 [2024-05-15 10:24:05.512691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.842 [2024-05-15 10:24:05.512704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2ae0, cid 5, qid 0 00:30:19.842 [2024-05-15 10:24:05.512709] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2980, cid 4, qid 0 00:30:19.842 [2024-05-15 10:24:05.512714] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x11f2c40, cid 6, qid 0 00:30:19.842 [2024-05-15 10:24:05.512719] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2da0, cid 7, qid 0 00:30:19.842 [2024-05-15 10:24:05.513031] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.842 [2024-05-15 10:24:05.513039] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.842 [2024-05-15 10:24:05.513042] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513046] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=8192, cccid=5 00:30:19.842 [2024-05-15 10:24:05.513050] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2ae0) on tqpair(0x1187a20): expected_datao=0, payload_size=8192 00:30:19.842 [2024-05-15 10:24:05.513054] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513239] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513246] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513255] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.842 [2024-05-15 10:24:05.513261] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.842 [2024-05-15 10:24:05.513264] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513268] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=512, cccid=4 00:30:19.842 [2024-05-15 10:24:05.513272] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2980) on tqpair(0x1187a20): expected_datao=0, payload_size=512 00:30:19.842 [2024-05-15 10:24:05.513276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513283] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513286] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513305] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.842 [2024-05-15 10:24:05.513311] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.842 [2024-05-15 10:24:05.513314] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513317] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=512, cccid=6 00:30:19.842 [2024-05-15 10:24:05.513322] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2c40) on tqpair(0x1187a20): expected_datao=0, payload_size=512 00:30:19.842 [2024-05-15 10:24:05.513326] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513335] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513338] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513344] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:19.842 [2024-05-15 10:24:05.513349] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:19.842 [2024-05-15 10:24:05.513353] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513356] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1187a20): datao=0, datal=4096, cccid=7 
00:30:19.842 [2024-05-15 10:24:05.513360] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f2da0) on tqpair(0x1187a20): expected_datao=0, payload_size=4096 00:30:19.842 [2024-05-15 10:24:05.513364] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513371] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513374] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513623] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.513629] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.513632] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513636] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2ae0) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.513650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.513656] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.513659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513662] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2980) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.513672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.513677] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.842 [2024-05-15 10:24:05.513681] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.842 [2024-05-15 10:24:05.513684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2c40) on tqpair=0x1187a20 00:30:19.842 [2024-05-15 10:24:05.513693] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.842 [2024-05-15 10:24:05.513699] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.843 [2024-05-15 10:24:05.513702] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.843 [2024-05-15 10:24:05.513706] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2da0) on tqpair=0x1187a20 00:30:19.843 ===================================================== 00:30:19.843 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.843 ===================================================== 00:30:19.843 Controller Capabilities/Features 00:30:19.843 ================================ 00:30:19.843 Vendor ID: 8086 00:30:19.843 Subsystem Vendor ID: 8086 00:30:19.843 Serial Number: SPDK00000000000001 00:30:19.843 Model Number: SPDK bdev Controller 00:30:19.843 Firmware Version: 24.05 00:30:19.843 Recommended Arb Burst: 6 00:30:19.843 IEEE OUI Identifier: e4 d2 5c 00:30:19.843 Multi-path I/O 00:30:19.843 May have multiple subsystem ports: Yes 00:30:19.843 May have multiple controllers: Yes 00:30:19.843 Associated with SR-IOV VF: No 00:30:19.843 Max Data Transfer Size: 131072 00:30:19.843 Max Number of Namespaces: 32 00:30:19.843 Max Number of I/O Queues: 127 00:30:19.843 NVMe Specification Version (VS): 1.3 00:30:19.843 NVMe Specification Version (Identify): 1.3 00:30:19.843 Maximum Queue Entries: 128 00:30:19.843 Contiguous Queues Required: Yes 00:30:19.843 Arbitration Mechanisms Supported 00:30:19.843 Weighted Round Robin: Not Supported 00:30:19.843 Vendor 
Specific: Not Supported 00:30:19.843 Reset Timeout: 15000 ms 00:30:19.843 Doorbell Stride: 4 bytes 00:30:19.843 NVM Subsystem Reset: Not Supported 00:30:19.843 Command Sets Supported 00:30:19.843 NVM Command Set: Supported 00:30:19.843 Boot Partition: Not Supported 00:30:19.843 Memory Page Size Minimum: 4096 bytes 00:30:19.843 Memory Page Size Maximum: 4096 bytes 00:30:19.843 Persistent Memory Region: Not Supported 00:30:19.843 Optional Asynchronous Events Supported 00:30:19.843 Namespace Attribute Notices: Supported 00:30:19.843 Firmware Activation Notices: Not Supported 00:30:19.843 ANA Change Notices: Not Supported 00:30:19.843 PLE Aggregate Log Change Notices: Not Supported 00:30:19.843 LBA Status Info Alert Notices: Not Supported 00:30:19.843 EGE Aggregate Log Change Notices: Not Supported 00:30:19.843 Normal NVM Subsystem Shutdown event: Not Supported 00:30:19.843 Zone Descriptor Change Notices: Not Supported 00:30:19.843 Discovery Log Change Notices: Not Supported 00:30:19.843 Controller Attributes 00:30:19.843 128-bit Host Identifier: Supported 00:30:19.843 Non-Operational Permissive Mode: Not Supported 00:30:19.843 NVM Sets: Not Supported 00:30:19.843 Read Recovery Levels: Not Supported 00:30:19.843 Endurance Groups: Not Supported 00:30:19.843 Predictable Latency Mode: Not Supported 00:30:19.843 Traffic Based Keep ALive: Not Supported 00:30:19.843 Namespace Granularity: Not Supported 00:30:19.843 SQ Associations: Not Supported 00:30:19.843 UUID List: Not Supported 00:30:19.843 Multi-Domain Subsystem: Not Supported 00:30:19.843 Fixed Capacity Management: Not Supported 00:30:19.843 Variable Capacity Management: Not Supported 00:30:19.843 Delete Endurance Group: Not Supported 00:30:19.843 Delete NVM Set: Not Supported 00:30:19.843 Extended LBA Formats Supported: Not Supported 00:30:19.843 Flexible Data Placement Supported: Not Supported 00:30:19.843 00:30:19.843 Controller Memory Buffer Support 00:30:19.843 ================================ 00:30:19.843 Supported: No 00:30:19.843 00:30:19.843 Persistent Memory Region Support 00:30:19.843 ================================ 00:30:19.843 Supported: No 00:30:19.843 00:30:19.843 Admin Command Set Attributes 00:30:19.843 ============================ 00:30:19.843 Security Send/Receive: Not Supported 00:30:19.843 Format NVM: Not Supported 00:30:19.843 Firmware Activate/Download: Not Supported 00:30:19.843 Namespace Management: Not Supported 00:30:19.843 Device Self-Test: Not Supported 00:30:19.843 Directives: Not Supported 00:30:19.843 NVMe-MI: Not Supported 00:30:19.843 Virtualization Management: Not Supported 00:30:19.843 Doorbell Buffer Config: Not Supported 00:30:19.843 Get LBA Status Capability: Not Supported 00:30:19.843 Command & Feature Lockdown Capability: Not Supported 00:30:19.843 Abort Command Limit: 4 00:30:19.843 Async Event Request Limit: 4 00:30:19.843 Number of Firmware Slots: N/A 00:30:19.843 Firmware Slot 1 Read-Only: N/A 00:30:19.843 Firmware Activation Without Reset: N/A 00:30:19.843 Multiple Update Detection Support: N/A 00:30:19.843 Firmware Update Granularity: No Information Provided 00:30:19.843 Per-Namespace SMART Log: No 00:30:19.843 Asymmetric Namespace Access Log Page: Not Supported 00:30:19.843 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:19.843 Command Effects Log Page: Supported 00:30:19.843 Get Log Page Extended Data: Supported 00:30:19.843 Telemetry Log Pages: Not Supported 00:30:19.843 Persistent Event Log Pages: Not Supported 00:30:19.843 Supported Log Pages Log Page: May Support 00:30:19.843 Commands 
Supported & Effects Log Page: Not Supported 00:30:19.843 Feature Identifiers & Effects Log Page:May Support 00:30:19.843 NVMe-MI Commands & Effects Log Page: May Support 00:30:19.843 Data Area 4 for Telemetry Log: Not Supported 00:30:19.843 Error Log Page Entries Supported: 128 00:30:19.843 Keep Alive: Supported 00:30:19.843 Keep Alive Granularity: 10000 ms 00:30:19.843 00:30:19.843 NVM Command Set Attributes 00:30:19.843 ========================== 00:30:19.843 Submission Queue Entry Size 00:30:19.843 Max: 64 00:30:19.843 Min: 64 00:30:19.843 Completion Queue Entry Size 00:30:19.843 Max: 16 00:30:19.843 Min: 16 00:30:19.843 Number of Namespaces: 32 00:30:19.843 Compare Command: Supported 00:30:19.843 Write Uncorrectable Command: Not Supported 00:30:19.843 Dataset Management Command: Supported 00:30:19.843 Write Zeroes Command: Supported 00:30:19.843 Set Features Save Field: Not Supported 00:30:19.843 Reservations: Supported 00:30:19.843 Timestamp: Not Supported 00:30:19.843 Copy: Supported 00:30:19.843 Volatile Write Cache: Present 00:30:19.843 Atomic Write Unit (Normal): 1 00:30:19.843 Atomic Write Unit (PFail): 1 00:30:19.843 Atomic Compare & Write Unit: 1 00:30:19.843 Fused Compare & Write: Supported 00:30:19.843 Scatter-Gather List 00:30:19.843 SGL Command Set: Supported 00:30:19.843 SGL Keyed: Supported 00:30:19.843 SGL Bit Bucket Descriptor: Not Supported 00:30:19.843 SGL Metadata Pointer: Not Supported 00:30:19.843 Oversized SGL: Not Supported 00:30:19.843 SGL Metadata Address: Not Supported 00:30:19.843 SGL Offset: Supported 00:30:19.843 Transport SGL Data Block: Not Supported 00:30:19.843 Replay Protected Memory Block: Not Supported 00:30:19.843 00:30:19.843 Firmware Slot Information 00:30:19.843 ========================= 00:30:19.843 Active slot: 1 00:30:19.843 Slot 1 Firmware Revision: 24.05 00:30:19.843 00:30:19.843 00:30:19.843 Commands Supported and Effects 00:30:19.843 ============================== 00:30:19.843 Admin Commands 00:30:19.843 -------------- 00:30:19.843 Get Log Page (02h): Supported 00:30:19.843 Identify (06h): Supported 00:30:19.843 Abort (08h): Supported 00:30:19.843 Set Features (09h): Supported 00:30:19.843 Get Features (0Ah): Supported 00:30:19.843 Asynchronous Event Request (0Ch): Supported 00:30:19.843 Keep Alive (18h): Supported 00:30:19.843 I/O Commands 00:30:19.843 ------------ 00:30:19.843 Flush (00h): Supported LBA-Change 00:30:19.843 Write (01h): Supported LBA-Change 00:30:19.843 Read (02h): Supported 00:30:19.843 Compare (05h): Supported 00:30:19.843 Write Zeroes (08h): Supported LBA-Change 00:30:19.843 Dataset Management (09h): Supported LBA-Change 00:30:19.843 Copy (19h): Supported LBA-Change 00:30:19.843 Unknown (79h): Supported LBA-Change 00:30:19.843 Unknown (7Ah): Supported 00:30:19.843 00:30:19.843 Error Log 00:30:19.843 ========= 00:30:19.843 00:30:19.843 Arbitration 00:30:19.843 =========== 00:30:19.843 Arbitration Burst: 1 00:30:19.843 00:30:19.843 Power Management 00:30:19.843 ================ 00:30:19.843 Number of Power States: 1 00:30:19.843 Current Power State: Power State #0 00:30:19.843 Power State #0: 00:30:19.843 Max Power: 0.00 W 00:30:19.843 Non-Operational State: Operational 00:30:19.843 Entry Latency: Not Reported 00:30:19.843 Exit Latency: Not Reported 00:30:19.843 Relative Read Throughput: 0 00:30:19.843 Relative Read Latency: 0 00:30:19.843 Relative Write Throughput: 0 00:30:19.843 Relative Write Latency: 0 00:30:19.843 Idle Power: Not Reported 00:30:19.843 Active Power: Not Reported 00:30:19.843 Non-Operational 
Permissive Mode: Not Supported 00:30:19.843 00:30:19.843 Health Information 00:30:19.843 ================== 00:30:19.843 Critical Warnings: 00:30:19.843 Available Spare Space: OK 00:30:19.843 Temperature: OK 00:30:19.843 Device Reliability: OK 00:30:19.843 Read Only: No 00:30:19.843 Volatile Memory Backup: OK 00:30:19.843 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:19.843 Temperature Threshold: [2024-05-15 10:24:05.513808] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.843 [2024-05-15 10:24:05.513813] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.513821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.513834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2da0, cid 7, qid 0 00:30:19.844 [2024-05-15 10:24:05.514109] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.514118] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.514121] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2da0) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.514157] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:30:19.844 [2024-05-15 10:24:05.514169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.844 [2024-05-15 10:24:05.514176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.844 [2024-05-15 10:24:05.514185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.844 [2024-05-15 10:24:05.514191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:19.844 [2024-05-15 10:24:05.514199] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514202] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.514213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.514226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.514457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.514465] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.514469] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.514480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514484] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514487] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.514494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.514509] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.514759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.514767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.514770] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514774] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.514779] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:19.844 [2024-05-15 10:24:05.514784] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:19.844 [2024-05-15 10:24:05.514794] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514798] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.514801] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.514808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.514820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.515059] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.515067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.515070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515074] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.515086] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515089] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515093] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.515100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.515114] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.515371] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.515380] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.515383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.515398] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515402] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515405] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.515412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.515424] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.515658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.515666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.515669] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515673] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.515683] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515687] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515690] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.515697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.515708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.515968] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.515976] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.515979] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515983] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.515994] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.515997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.516001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.516008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.516019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.516259] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.516266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.516269] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.516273] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.516284] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.516288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:19.844 [2024-05-15 
10:24:05.520297] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1187a20) 00:30:19.844 [2024-05-15 10:24:05.520306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:19.844 [2024-05-15 10:24:05.520320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f2820, cid 3, qid 0 00:30:19.844 [2024-05-15 10:24:05.520584] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:19.844 [2024-05-15 10:24:05.520593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:19.844 [2024-05-15 10:24:05.520597] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:19.844 [2024-05-15 10:24:05.520600] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11f2820) on tqpair=0x1187a20 00:30:19.844 [2024-05-15 10:24:05.520610] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:30:19.844 0 Kelvin (-273 Celsius) 00:30:19.844 Available Spare: 0% 00:30:19.844 Available Spare Threshold: 0% 00:30:19.844 Life Percentage Used: 0% 00:30:19.844 Data Units Read: 0 00:30:19.844 Data Units Written: 0 00:30:19.844 Host Read Commands: 0 00:30:19.844 Host Write Commands: 0 00:30:19.844 Controller Busy Time: 0 minutes 00:30:19.844 Power Cycles: 0 00:30:19.844 Power On Hours: 0 hours 00:30:19.844 Unsafe Shutdowns: 0 00:30:19.844 Unrecoverable Media Errors: 0 00:30:19.844 Lifetime Error Log Entries: 0 00:30:19.844 Warning Temperature Time: 0 minutes 00:30:19.844 Critical Temperature Time: 0 minutes 00:30:19.844 00:30:19.844 Number of Queues 00:30:19.844 ================ 00:30:19.844 Number of I/O Submission Queues: 127 00:30:19.844 Number of I/O Completion Queues: 127 00:30:19.844 00:30:19.844 Active Namespaces 00:30:19.844 ================= 00:30:19.844 Namespace ID:1 00:30:19.844 Error Recovery Timeout: Unlimited 00:30:19.844 Command Set Identifier: NVM (00h) 00:30:19.844 Deallocate: Supported 00:30:19.844 Deallocated/Unwritten Error: Not Supported 00:30:19.844 Deallocated Read Value: Unknown 00:30:19.844 Deallocate in Write Zeroes: Not Supported 00:30:19.844 Deallocated Guard Field: 0xFFFF 00:30:19.844 Flush: Supported 00:30:19.844 Reservation: Supported 00:30:19.844 Namespace Sharing Capabilities: Multiple Controllers 00:30:19.844 Size (in LBAs): 131072 (0GiB) 00:30:19.844 Capacity (in LBAs): 131072 (0GiB) 00:30:19.844 Utilization (in LBAs): 131072 (0GiB) 00:30:19.844 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:19.844 EUI64: ABCDEF0123456789 00:30:19.844 UUID: 51932790-ab1e-4c0c-84f4-04f95f8a65fd 00:30:19.844 Thin Provisioning: Not Supported 00:30:19.844 Per-NS Atomic Units: Yes 00:30:19.844 Atomic Boundary Size (Normal): 0 00:30:19.844 Atomic Boundary Size (PFail): 0 00:30:19.845 Atomic Boundary Offset: 0 00:30:19.845 Maximum Single Source Range Length: 65535 00:30:19.845 Maximum Copy Length: 65535 00:30:19.845 Maximum Source Range Count: 1 00:30:19.845 NGUID/EUI64 Never Reused: No 00:30:19.845 Namespace Write Protected: No 00:30:19.845 Number of LBA Formats: 1 00:30:19.845 Current LBA Format: LBA Format #00 00:30:19.845 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:19.845 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:19.845 rmmod nvme_tcp 00:30:19.845 rmmod nvme_fabrics 00:30:19.845 rmmod nvme_keyring 00:30:19.845 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2983689 ']' 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2983689 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 2983689 ']' 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 2983689 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2983689 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2983689' 00:30:20.108 killing process with pid 2983689 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 2983689 00:30:20.108 [2024-05-15 10:24:05.690723] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 2983689 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:20.108 10:24:05 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.665 10:24:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:22.665 00:30:22.665 real 0m11.246s 00:30:22.665 user 0m7.992s 00:30:22.665 sys 0m5.942s 00:30:22.665 10:24:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:22.665 10:24:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:22.665 ************************************ 00:30:22.665 END TEST nvmf_identify 00:30:22.665 ************************************ 00:30:22.665 10:24:07 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:22.665 10:24:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:22.665 10:24:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:22.665 10:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:22.665 ************************************ 00:30:22.665 START TEST nvmf_perf 00:30:22.665 ************************************ 00:30:22.665 10:24:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:22.665 * Looking for test storage... 00:30:22.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # 
MALLOC_BLOCK_SIZE=512 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:22.665 10:24:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.320 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:29.321 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:29.321 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:29.321 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.321 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:29.321 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:29.584 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:29.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:29.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:30:29.847 00:30:29.847 --- 10.0.0.2 ping statistics --- 00:30:29.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.847 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:30:29.847 00:30:29.847 --- 10.0.0.1 ping statistics --- 00:30:29.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.847 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2988684 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2988684 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 2988684 ']' 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:29.847 10:24:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:29.847 [2024-05-15 10:24:15.575105] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:30:29.847 [2024-05-15 10:24:15.575173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.847 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.109 [2024-05-15 10:24:15.646150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:30.109 [2024-05-15 10:24:15.686759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.109 [2024-05-15 10:24:15.686805] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.109 [2024-05-15 10:24:15.686813] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.109 [2024-05-15 10:24:15.686820] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.109 [2024-05-15 10:24:15.686826] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:30.109 [2024-05-15 10:24:15.686967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.109 [2024-05-15 10:24:15.687103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:30.109 [2024-05-15 10:24:15.687263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.109 [2024-05-15 10:24:15.687264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:30.682 10:24:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:30.682 10:24:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:30:30.683 10:24:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:30.683 10:24:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:30.683 10:24:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:30.683 10:24:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:30.683 10:24:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:30.683 10:24:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:31.257 10:24:16 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:31.257 10:24:16 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:31.257 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:30:31.257 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:31.519 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:31.519 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:30:31.519 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:31.519 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:31.519 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:31.781 [2024-05-15 10:24:17.370481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
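Condensed for readability, the NVMe-oF/TCP target bring-up that the xtrace lines below walk through reduces to the following rpc.py sequence (a sketch assembled from this trace; the RPC socket is the harness default, and the bdev names come from the earlier bdev_malloc_create / gen_nvme steps):

    # Sketch of the target setup traced below (host/perf.sh steps @42-@49), assuming nvmf_tgt is already running.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o                                        # TCP transport init (notice above)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0               # 64 MiB / 512 B malloc bdev
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1               # local NVMe at 0000:65:00.0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420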
00:30:31.781 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.781 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:31.781 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:32.043 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:32.043 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:32.305 10:24:17 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.305 [2024-05-15 10:24:18.004606] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:32.305 [2024-05-15 10:24:18.004862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.305 10:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:32.601 10:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:30:32.601 10:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:32.601 10:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:32.601 10:24:18 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:34.043 Initializing NVMe Controllers 00:30:34.043 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:30:34.043 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:30:34.043 Initialization complete. Launching workers. 00:30:34.043 ======================================================== 00:30:34.043 Latency(us) 00:30:34.043 Device Information : IOPS MiB/s Average min max 00:30:34.043 PCIE (0000:65:00.0) NSID 1 from core 0: 79828.15 311.83 400.11 13.33 4894.81 00:30:34.043 ======================================================== 00:30:34.043 Total : 79828.15 311.83 400.11 13.33 4894.81 00:30:34.043 00:30:34.043 10:24:19 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.043 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.432 Initializing NVMe Controllers 00:30:35.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:35.432 Initialization complete. Launching workers. 
00:30:35.432 ======================================================== 00:30:35.432 Latency(us) 00:30:35.432 Device Information : IOPS MiB/s Average min max 00:30:35.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.00 0.28 14163.49 730.59 46299.98 00:30:35.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 22034.44 7956.39 55867.86 00:30:35.432 ======================================================== 00:30:35.432 Total : 117.00 0.46 17258.05 730.59 55867.86 00:30:35.432 00:30:35.432 10:24:20 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.432 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.821 Initializing NVMe Controllers 00:30:36.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:36.821 Initialization complete. Launching workers. 00:30:36.821 ======================================================== 00:30:36.821 Latency(us) 00:30:36.821 Device Information : IOPS MiB/s Average min max 00:30:36.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7395.20 28.89 4327.03 807.28 12171.39 00:30:36.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3745.43 14.63 8544.35 6289.26 16498.78 00:30:36.821 ======================================================== 00:30:36.821 Total : 11140.63 43.52 5744.87 807.28 16498.78 00:30:36.821 00:30:36.821 10:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:36.821 10:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:36.821 10:24:22 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.821 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.369 Initializing NVMe Controllers 00:30:39.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.369 Controller IO queue size 128, less than required. 00:30:39.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.369 Controller IO queue size 128, less than required. 00:30:39.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:39.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:39.369 Initialization complete. Launching workers. 
00:30:39.369 ======================================================== 00:30:39.369 Latency(us) 00:30:39.369 Device Information : IOPS MiB/s Average min max 00:30:39.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 806.99 201.75 166281.17 103853.97 309708.92 00:30:39.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.00 141.75 230766.53 87246.45 361991.84 00:30:39.369 ======================================================== 00:30:39.369 Total : 1373.99 343.50 192891.94 87246.45 361991.84 00:30:39.369 00:30:39.369 10:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:39.369 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.369 No valid NVMe controllers or AIO or URING devices found 00:30:39.369 Initializing NVMe Controllers 00:30:39.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.369 Controller IO queue size 128, less than required. 00:30:39.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.369 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:39.369 Controller IO queue size 128, less than required. 00:30:39.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.369 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:39.369 WARNING: Some requested NVMe devices were skipped 00:30:39.369 10:24:24 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:39.369 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.919 Initializing NVMe Controllers 00:30:41.919 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.919 Controller IO queue size 128, less than required. 00:30:41.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.919 Controller IO queue size 128, less than required. 00:30:41.919 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.919 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:41.919 Initialization complete. Launching workers. 
00:30:41.919 00:30:41.919 ==================== 00:30:41.919 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:41.919 TCP transport: 00:30:41.919 polls: 40130 00:30:41.919 idle_polls: 15433 00:30:41.919 sock_completions: 24697 00:30:41.919 nvme_completions: 3381 00:30:41.919 submitted_requests: 5100 00:30:41.919 queued_requests: 1 00:30:41.919 00:30:41.919 ==================== 00:30:41.919 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:41.919 TCP transport: 00:30:41.919 polls: 42860 00:30:41.919 idle_polls: 18260 00:30:41.919 sock_completions: 24600 00:30:41.919 nvme_completions: 3349 00:30:41.919 submitted_requests: 5050 00:30:41.919 queued_requests: 1 00:30:41.919 ======================================================== 00:30:41.919 Latency(us) 00:30:41.919 Device Information : IOPS MiB/s Average min max 00:30:41.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 843.57 210.89 155884.85 87147.74 217081.90 00:30:41.919 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 835.58 208.90 157799.51 90791.12 278940.70 00:30:41.919 ======================================================== 00:30:41.919 Total : 1679.15 419.79 156837.62 87147.74 278940.70 00:30:41.919 00:30:41.919 10:24:27 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:41.919 10:24:27 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:42.180 10:24:27 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:42.180 10:24:27 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:30:42.180 10:24:27 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:43.126 10:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=778f465f-a143-4c4c-a044-88f35bb03127 00:30:43.126 10:24:28 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 778f465f-a143-4c4c-a044-88f35bb03127 00:30:43.126 10:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=778f465f-a143-4c4c-a044-88f35bb03127 00:30:43.126 10:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:30:43.126 10:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:30:43.126 10:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:30:43.126 10:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:43.388 10:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:30:43.388 { 00:30:43.388 "uuid": "778f465f-a143-4c4c-a044-88f35bb03127", 00:30:43.388 "name": "lvs_0", 00:30:43.388 "base_bdev": "Nvme0n1", 00:30:43.388 "total_data_clusters": 457407, 00:30:43.388 "free_clusters": 457407, 00:30:43.388 "block_size": 512, 00:30:43.388 "cluster_size": 4194304 00:30:43.388 } 00:30:43.388 ]' 00:30:43.388 10:24:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="778f465f-a143-4c4c-a044-88f35bb03127") .free_clusters' 00:30:43.388 10:24:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=457407 00:30:43.388 10:24:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="778f465f-a143-4c4c-a044-88f35bb03127") .cluster_size' 00:30:43.388 10:24:29 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:30:43.388 10:24:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=1829628 00:30:43.388 10:24:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 1829628 00:30:43.388 1829628 00:30:43.388 10:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:30:43.388 10:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:43.388 10:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 778f465f-a143-4c4c-a044-88f35bb03127 lbd_0 20480 00:30:43.649 10:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=cf7766ca-5b2a-4e03-93d5-abf52e7c9537 00:30:43.649 10:24:29 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore cf7766ca-5b2a-4e03-93d5-abf52e7c9537 lvs_n_0 00:30:45.569 10:24:30 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=25185191-8f9e-4245-84b9-ee091dc4532f 00:30:45.569 10:24:30 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 25185191-8f9e-4245-84b9-ee091dc4532f 00:30:45.569 10:24:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_uuid=25185191-8f9e-4245-84b9-ee091dc4532f 00:30:45.569 10:24:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_info 00:30:45.569 10:24:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local fc 00:30:45.569 10:24:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local cs 00:30:45.569 10:24:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:30:45.569 { 00:30:45.569 "uuid": "778f465f-a143-4c4c-a044-88f35bb03127", 00:30:45.569 "name": "lvs_0", 00:30:45.569 "base_bdev": "Nvme0n1", 00:30:45.569 "total_data_clusters": 457407, 00:30:45.569 "free_clusters": 452287, 00:30:45.569 "block_size": 512, 00:30:45.569 "cluster_size": 4194304 00:30:45.569 }, 00:30:45.569 { 00:30:45.569 "uuid": "25185191-8f9e-4245-84b9-ee091dc4532f", 00:30:45.569 "name": "lvs_n_0", 00:30:45.569 "base_bdev": "cf7766ca-5b2a-4e03-93d5-abf52e7c9537", 00:30:45.569 "total_data_clusters": 5114, 00:30:45.569 "free_clusters": 5114, 00:30:45.569 "block_size": 512, 00:30:45.569 "cluster_size": 4194304 00:30:45.569 } 00:30:45.569 ]' 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="25185191-8f9e-4245-84b9-ee091dc4532f") .free_clusters' 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # fc=5114 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="25185191-8f9e-4245-84b9-ee091dc4532f") .cluster_size' 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # cs=4194304 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # free_mb=20456 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1371 -- # echo 20456 00:30:45.569 20456 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:45.569 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 25185191-8f9e-4245-84b9-ee091dc4532f lbd_nest_0 20456 00:30:45.569 10:24:31 
nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=723273a4-0a06-42d7-87e9-bce8465c4f8e 00:30:45.570 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:45.832 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:45.832 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 723273a4-0a06-42d7-87e9-bce8465c4f8e 00:30:45.832 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.094 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:46.094 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:46.094 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:46.094 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:46.094 10:24:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:46.094 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.349 Initializing NVMe Controllers 00:30:58.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.349 Initialization complete. Launching workers. 00:30:58.349 ======================================================== 00:30:58.349 Latency(us) 00:30:58.349 Device Information : IOPS MiB/s Average min max 00:30:58.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.49 0.02 21124.82 505.20 49309.71 00:30:58.349 ======================================================== 00:30:58.349 Total : 47.49 0.02 21124.82 505.20 49309.71 00:30:58.349 00:30:58.349 10:24:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:58.349 10:24:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:58.349 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.365 Initializing NVMe Controllers 00:31:08.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.365 Initialization complete. Launching workers. 
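Two details worth noting at this point. get_lvs_free_mb above derives the usable size as free_clusters * cluster_size: 457407 * 4 MiB = 1829628 MiB for lvs_0 (then capped to 20480 MiB for lbd_0) and 5114 * 4 MiB = 20456 MiB for the nested lvs_n_0. The qd_depth and io_size arrays then drive a 3 x 2 sweep, i.e. six spdk_nvme_perf runs against the new subsystem, the first of which (-q 1 -o 512) is launching above. A rough equivalent of that loop, with the binary path shortened for illustration (not the script's exact code):
for qd in 1 32 128; do
  for o in 512 131072; do
    spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  done
done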
00:31:08.365 ======================================================== 00:31:08.365 Latency(us) 00:31:08.365 Device Information : IOPS MiB/s Average min max 00:31:08.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.20 10.28 12172.98 6020.11 47887.45 00:31:08.365 ======================================================== 00:31:08.365 Total : 82.20 10.28 12172.98 6020.11 47887.45 00:31:08.365 00:31:08.365 10:24:52 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:08.365 10:24:52 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:08.365 10:24:52 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.365 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.390 Initializing NVMe Controllers 00:31:18.390 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.390 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:18.390 Initialization complete. Launching workers. 00:31:18.390 ======================================================== 00:31:18.390 Latency(us) 00:31:18.390 Device Information : IOPS MiB/s Average min max 00:31:18.390 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7111.65 3.47 4499.99 516.53 12119.87 00:31:18.390 ======================================================== 00:31:18.390 Total : 7111.65 3.47 4499.99 516.53 12119.87 00:31:18.390 00:31:18.390 10:25:02 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:18.390 10:25:02 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.390 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.405 Initializing NVMe Controllers 00:31:28.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.405 Initialization complete. Launching workers. 00:31:28.405 ======================================================== 00:31:28.405 Latency(us) 00:31:28.405 Device Information : IOPS MiB/s Average min max 00:31:28.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1470.10 183.76 21790.28 1386.69 62754.99 00:31:28.405 ======================================================== 00:31:28.405 Total : 1470.10 183.76 21790.28 1386.69 62754.99 00:31:28.405 00:31:28.405 10:25:13 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:28.405 10:25:13 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:28.405 10:25:13 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:28.405 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.421 Initializing NVMe Controllers 00:31:38.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.421 Controller IO queue size 128, less than required. 00:31:38.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:38.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:38.421 Initialization complete. Launching workers. 00:31:38.421 ======================================================== 00:31:38.421 Latency(us) 00:31:38.421 Device Information : IOPS MiB/s Average min max 00:31:38.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15890.35 7.76 8060.15 2572.21 21726.76 00:31:38.421 ======================================================== 00:31:38.421 Total : 15890.35 7.76 8060.15 2572.21 21726.76 00:31:38.421 00:31:38.421 10:25:23 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:38.421 10:25:23 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:38.421 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.486 Initializing NVMe Controllers 00:31:48.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:48.486 Controller IO queue size 128, less than required. 00:31:48.486 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:48.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:48.486 Initialization complete. Launching workers. 00:31:48.486 ======================================================== 00:31:48.486 Latency(us) 00:31:48.486 Device Information : IOPS MiB/s Average min max 00:31:48.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1093.20 136.65 117901.91 24141.58 278410.72 00:31:48.486 ======================================================== 00:31:48.486 Total : 1093.20 136.65 117901.91 24141.58 278410.72 00:31:48.486 00:31:48.486 10:25:34 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.748 10:25:34 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 723273a4-0a06-42d7-87e9-bce8465c4f8e 00:31:50.139 10:25:35 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:50.401 10:25:36 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cf7766ca-5b2a-4e03-93d5-abf52e7c9537 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:50.663 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:50.663 rmmod nvme_tcp 00:31:50.926 rmmod nvme_fabrics 00:31:50.926 rmmod nvme_keyring 00:31:50.926 10:25:36 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2988684 ']' 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2988684 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 2988684 ']' 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 2988684 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2988684 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2988684' 00:31:50.926 killing process with pid 2988684 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 2988684 00:31:50.926 [2024-05-15 10:25:36.585396] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:50.926 10:25:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 2988684 00:31:52.845 10:25:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:52.845 10:25:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:52.846 10:25:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:52.846 10:25:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:52.846 10:25:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:52.846 10:25:38 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.846 10:25:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:52.846 10:25:38 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.827 10:25:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:54.827 00:31:54.827 real 1m32.625s 00:31:54.827 user 5m28.437s 00:31:54.827 sys 0m13.181s 00:31:54.827 10:25:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:54.827 10:25:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:54.827 ************************************ 00:31:54.827 END TEST nvmf_perf 00:31:54.827 ************************************ 00:31:55.090 10:25:40 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:55.090 10:25:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:31:55.090 10:25:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:55.090 10:25:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:55.090 ************************************ 00:31:55.090 START TEST nvmf_fio_host 00:31:55.090 ************************************ 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:55.090 * Looking for test storage... 00:31:55.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
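The nvmf/common.sh being sourced here is what builds the TCP testbed used for the rest of the run: one E810 port is moved into a network namespace and becomes the target side (10.0.0.2), while its peer port stays in the root namespace as the initiator (10.0.0.1). Condensed from the nvmf_tcp_init steps traced further down (illustrative, not the helper's exact code):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on port 4420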
00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.090 10:25:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
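gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device ID, using the arrays populated just below (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus the Mellanox IDs); on this host it ends up finding the two 0x159b ports at 0000:4b:00.0 and 0000:4b:00.1. A manual equivalent, for illustration only:
lspci -Dnn | grep -i '8086:159b'   # list E810 ports by vendor:device ID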
00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:03.250 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:03.250 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:03.250 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:03.250 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:03.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:03.251 10:25:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:03.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms 00:32:03.251 00:32:03.251 --- 10.0.0.2 ping statistics --- 00:32:03.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.251 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:03.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:32:03.251 00:32:03.251 --- 10.0.0.1 ping statistics --- 00:32:03.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.251 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3008418 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3008418 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 3008418 ']' 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:03.251 10:25:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.251 [2024-05-15 10:25:48.232168] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:32:03.251 [2024-05-15 10:25:48.232237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.251 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.251 [2024-05-15 10:25:48.303217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:03.251 [2024-05-15 10:25:48.343667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:03.251 [2024-05-15 10:25:48.343713] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.251 [2024-05-15 10:25:48.343726] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.251 [2024-05-15 10:25:48.343734] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.251 [2024-05-15 10:25:48.343740] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:03.251 [2024-05-15 10:25:48.343888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.251 [2024-05-15 10:25:48.344008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:03.251 [2024-05-15 10:25:48.344168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.251 [2024-05-15 10:25:48.344169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.251 [2024-05-15 10:25:49.021860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:03.251 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.514 Malloc1 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:32:03.514 [2024-05-15 10:25:49.109079] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:03.514 [2024-05-15 10:25:49.109305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:03.514 
10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:03.514 10:25:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:03.777 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:03.777 fio-3.35 00:32:03.777 Starting 1 thread 00:32:03.777 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.364 00:32:06.364 test: (groupid=0, jobs=1): err= 0: pid=3008818: Wed May 15 10:25:51 2024 00:32:06.364 read: IOPS=12.3k, BW=48.0MiB/s (50.4MB/s)(96.2MiB/2004msec) 00:32:06.364 slat (usec): min=2, max=250, avg= 2.25, stdev= 2.19 00:32:06.364 clat (usec): min=3027, max=18180, avg=6144.05, stdev=1586.06 00:32:06.364 lat (usec): min=3030, max=18182, avg=6146.30, stdev=1586.25 00:32:06.364 clat percentiles (usec): 00:32:06.364 | 1.00th=[ 3982], 5.00th=[ 4490], 10.00th=[ 4752], 20.00th=[ 5080], 00:32:06.364 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 6128], 00:32:06.364 | 70.00th=[ 6390], 80.00th=[ 6849], 90.00th=[ 7832], 95.00th=[ 8717], 00:32:06.364 | 99.00th=[12911], 99.50th=[14484], 99.90th=[17433], 99.95th=[17957], 00:32:06.364 | 99.99th=[17957] 00:32:06.364 bw ( KiB/s): min=34200, max=55048, per=99.90%, avg=49122.00, stdev=10008.24, samples=4 00:32:06.364 iops : min= 8550, max=13762, avg=12280.50, stdev=2502.06, samples=4 00:32:06.364 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(96.0MiB/2004msec); 0 zone resets 00:32:06.364 slat (usec): min=2, max=246, avg= 2.33, stdev= 1.72 00:32:06.364 clat (usec): min=2218, max=16153, avg=4204.58, stdev=1205.66 00:32:06.364 lat (usec): min=2220, max=16175, avg=4206.91, stdev=1205.96 00:32:06.364 clat percentiles (usec): 00:32:06.364 | 1.00th=[ 2606], 5.00th=[ 2900], 10.00th=[ 3097], 20.00th=[ 3359], 00:32:06.364 | 30.00th=[ 3556], 40.00th=[ 3752], 50.00th=[ 3916], 60.00th=[ 4113], 00:32:06.364 | 70.00th=[ 4293], 80.00th=[ 4752], 90.00th=[ 5997], 95.00th=[ 6456], 00:32:06.364 | 99.00th=[ 8029], 99.50th=[ 9503], 99.90th=[13173], 99.95th=[14615], 00:32:06.364 | 99.99th=[15795] 00:32:06.364 bw ( KiB/s): min=35080, max=55352, per=99.98%, avg=49024.00, stdev=9438.70, samples=4 00:32:06.364 iops : min= 8770, max=13838, avg=12256.00, stdev=2359.68, samples=4 00:32:06.364 lat (msec) : 4=27.63%, 10=70.85%, 20=1.52% 00:32:06.364 cpu : usr=74.74%, sys=20.27%, ctx=12, majf=0, minf=5 00:32:06.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:06.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.364 issued rwts: total=24635,24565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.364 00:32:06.364 Run status group 0 (all jobs): 00:32:06.364 READ: bw=48.0MiB/s (50.4MB/s), 48.0MiB/s-48.0MiB/s (50.4MB/s-50.4MB/s), io=96.2MiB (101MB), run=2004-2004msec 00:32:06.364 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=96.0MiB (101MB), run=2004-2004msec 00:32:06.364 10:25:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:06.364 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:06.364 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:32:06.364 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:06.364 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:32:06.364 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:06.365 10:25:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:06.631 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:06.631 fio-3.35 00:32:06.631 Starting 1 thread 00:32:06.631 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.178 00:32:09.178 test: (groupid=0, jobs=1): err= 0: pid=3009550: Wed May 15 10:25:54 2024 00:32:09.178 read: IOPS=7985, BW=125MiB/s (131MB/s)(250MiB/2004msec) 00:32:09.178 slat (usec): min=3, max=111, avg= 3.72, stdev= 1.49 00:32:09.178 clat (usec): min=2948, max=42169, avg=9952.47, stdev=3393.01 00:32:09.178 lat (usec): min=2952, max=42173, avg=9956.19, stdev=3393.50 
00:32:09.178 clat percentiles (usec): 00:32:09.178 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7439], 00:32:09.178 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10159], 00:32:09.178 | 70.00th=[10945], 80.00th=[11863], 90.00th=[13698], 95.00th=[15533], 00:32:09.178 | 99.00th=[22938], 99.50th=[24249], 99.90th=[33162], 99.95th=[34341], 00:32:09.178 | 99.99th=[38536] 00:32:09.178 bw ( KiB/s): min=52032, max=69472, per=49.50%, avg=63240.00, stdev=7677.79, samples=4 00:32:09.178 iops : min= 3252, max= 4342, avg=3952.50, stdev=479.86, samples=4 00:32:09.178 write: IOPS=4624, BW=72.3MiB/s (75.8MB/s)(129MiB/1792msec); 0 zone resets 00:32:09.178 slat (usec): min=40, max=325, avg=41.17, stdev= 7.60 00:32:09.178 clat (usec): min=2859, max=37344, avg=10561.31, stdev=3333.79 00:32:09.178 lat (usec): min=2899, max=37390, avg=10602.49, stdev=3338.08 00:32:09.178 clat percentiles (usec): 00:32:09.178 | 1.00th=[ 6521], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[ 8586], 00:32:09.178 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10552], 00:32:09.178 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12780], 95.00th=[14353], 00:32:09.178 | 99.00th=[31065], 99.50th=[32900], 99.90th=[34866], 99.95th=[34866], 00:32:09.178 | 99.99th=[37487] 00:32:09.178 bw ( KiB/s): min=53472, max=72992, per=89.10%, avg=65928.00, stdev=8572.23, samples=4 00:32:09.178 iops : min= 3342, max= 4562, avg=4120.50, stdev=535.76, samples=4 00:32:09.178 lat (msec) : 4=0.17%, 10=55.10%, 20=42.70%, 50=2.03% 00:32:09.178 cpu : usr=82.38%, sys=12.93%, ctx=14, majf=0, minf=24 00:32:09.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:09.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:09.178 issued rwts: total=16002,8287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:09.178 00:32:09.178 Run status group 0 (all jobs): 00:32:09.178 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=250MiB (262MB), run=2004-2004msec 00:32:09.178 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=129MiB (136MB), run=1792-1792msec 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=() 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # local bdfs 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # jq -r 
'.config[].params.traddr' 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:65:00.0 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.178 Nvme0n1 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.178 10:25:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=5b71d986-aa1f-49a5-9d3f-eecc0595b506 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb 5b71d986-aa1f-49a5-9d3f-eecc0595b506 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=5b71d986-aa1f-49a5-9d3f-eecc0595b506 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.752 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:32:09.752 { 00:32:09.752 "uuid": "5b71d986-aa1f-49a5-9d3f-eecc0595b506", 00:32:09.752 "name": "lvs_0", 00:32:09.752 "base_bdev": "Nvme0n1", 00:32:09.752 "total_data_clusters": 1787, 00:32:09.753 "free_clusters": 1787, 00:32:09.753 "block_size": 512, 00:32:09.753 "cluster_size": 1073741824 00:32:09.753 } 00:32:09.753 ]' 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="5b71d986-aa1f-49a5-9d3f-eecc0595b506") .free_clusters' 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=1787 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="5b71d986-aa1f-49a5-9d3f-eecc0595b506") .cluster_size' 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=1073741824 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=1829888 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 1829888 00:32:09.753 1829888 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 1829888 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.753 10:25:55 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.753 43ef8902-b921-450b-92c5-b41fc1ebab06 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 
00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:32:09.753 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:10.061 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:10.061 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:10.061 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:10.061 10:25:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:10.329 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:10.329 fio-3.35 00:32:10.329 Starting 1 thread 00:32:10.329 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.881 00:32:12.881 test: (groupid=0, jobs=1): err= 0: pid=3010410: Wed May 15 10:25:58 2024 00:32:12.881 read: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(80.8MiB/2005msec) 00:32:12.881 slat (usec): min=2, max=111, avg= 2.29, stdev= 1.03 00:32:12.881 clat (usec): min=4603, max=24924, avg=7031.01, stdev=1070.75 00:32:12.881 lat (usec): min=4606, max=24929, avg=7033.30, stdev=1070.85 00:32:12.881 clat percentiles (usec): 00:32:12.881 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6390], 00:32:12.881 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7111], 00:32:12.881 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8160], 00:32:12.881 | 99.00th=[ 9241], 99.50th=[11863], 99.90th=[22414], 99.95th=[24773], 00:32:12.881 | 99.99th=[24773] 00:32:12.881 bw ( KiB/s): min=39240, max=42048, per=99.90%, avg=41246.00, stdev=1345.80, samples=4 00:32:12.881 iops : min= 9810, max=10512, avg=10311.50, stdev=336.45, samples=4 00:32:12.881 write: IOPS=10.3k, BW=40.4MiB/s (42.3MB/s)(80.9MiB/2005msec); 0 zone resets 00:32:12.881 slat (nsec): min=2195, max=96658, avg=2383.69, stdev=721.46 00:32:12.881 clat (usec): min=1602, max=18231, avg=5304.78, stdev=801.72 00:32:12.881 lat (usec): min=1610, max=18246, avg=5307.17, stdev=801.86 00:32:12.881 clat percentiles (usec): 00:32:12.881 | 1.00th=[ 3949], 5.00th=[ 4359], 10.00th=[ 4621], 20.00th=[ 4817], 00:32:12.881 | 30.00th=[ 4948], 40.00th=[ 5145], 50.00th=[ 5276], 60.00th=[ 5407], 00:32:12.881 | 70.00th=[ 5538], 80.00th=[ 5735], 90.00th=[ 5997], 95.00th=[ 6194], 00:32:12.881 | 99.00th=[ 6915], 99.50th=[ 8356], 99.90th=[15664], 99.95th=[16909], 00:32:12.881 | 99.99th=[18220] 00:32:12.881 bw ( KiB/s): min=39808, max=42048, per=99.99%, avg=41328.00, stdev=1036.42, samples=4 00:32:12.881 iops : min= 9952, max=10512, avg=10332.00, stdev=259.11, samples=4 00:32:12.881 lat (msec) : 2=0.01%, 4=0.64%, 10=98.86%, 20=0.42%, 50=0.07% 00:32:12.881 cpu : usr=66.22%, sys=26.15%, ctx=13, majf=0, minf=14 00:32:12.881 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:12.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:12.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:12.881 issued rwts: total=20696,20718,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:32:12.881 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:12.881 00:32:12.881 Run status group 0 (all jobs): 00:32:12.881 READ: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=80.8MiB (84.8MB), run=2005-2005msec 00:32:12.881 WRITE: bw=40.4MiB/s (42.3MB/s), 40.4MiB/s-40.4MiB/s (42.3MB/s-42.3MB/s), io=80.9MiB (84.9MB), run=2005-2005msec 00:32:12.881 10:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:12.881 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.881 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.881 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:12.881 10:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:12.881 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:12.881 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=13b3681b-8cab-46a4-a892-69c35b4318fc 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 13b3681b-8cab-46a4-a892-69c35b4318fc 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_uuid=13b3681b-8cab-46a4-a892-69c35b4318fc 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_info 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local fc 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local cs 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # rpc_cmd bdev_lvol_get_lvstores 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.143 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.405 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:13.405 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # lvs_info='[ 00:32:13.405 { 00:32:13.405 "uuid": "5b71d986-aa1f-49a5-9d3f-eecc0595b506", 00:32:13.405 "name": "lvs_0", 00:32:13.405 "base_bdev": "Nvme0n1", 00:32:13.405 "total_data_clusters": 1787, 00:32:13.405 "free_clusters": 0, 00:32:13.405 "block_size": 512, 00:32:13.405 "cluster_size": 1073741824 00:32:13.405 }, 00:32:13.405 { 00:32:13.405 "uuid": "13b3681b-8cab-46a4-a892-69c35b4318fc", 00:32:13.405 "name": "lvs_n_0", 00:32:13.405 "base_bdev": "43ef8902-b921-450b-92c5-b41fc1ebab06", 00:32:13.405 "total_data_clusters": 457025, 00:32:13.405 "free_clusters": 457025, 00:32:13.405 "block_size": 512, 00:32:13.405 "cluster_size": 4194304 00:32:13.405 } 00:32:13.405 ]' 00:32:13.405 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="13b3681b-8cab-46a4-a892-69c35b4318fc") .free_clusters' 00:32:13.405 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # fc=457025 00:32:13.405 10:25:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="13b3681b-8cab-46a4-a892-69c35b4318fc") .cluster_size' 00:32:13.405 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # cs=4194304 
00:32:13.405 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # free_mb=1828100 00:32:13.405 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1371 -- # echo 1828100 00:32:13.405 1828100 00:32:13.405 10:25:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:32:13.405 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:13.405 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.350 e7b891fe-8da6-43af-9c75-f681a369e4e7 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.350 10:25:59 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:32:14.350 10:25:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:32:14.350 10:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:32:14.350 10:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:32:14.350 10:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:14.350 10:26:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.612 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:14.612 fio-3.35 00:32:14.612 Starting 1 thread 00:32:14.612 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.168 00:32:17.168 test: (groupid=0, jobs=1): err= 0: pid=3011424: Wed May 15 10:26:02 2024 00:32:17.168 read: IOPS=6376, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec) 00:32:17.168 slat (usec): min=2, max=109, avg= 2.32, stdev= 1.27 00:32:17.168 clat (usec): min=3600, max=18611, avg=11150.11, stdev=946.79 00:32:17.168 lat (usec): min=3619, max=18614, avg=11152.42, stdev=946.71 00:32:17.168 clat percentiles (usec): 00:32:17.168 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:32:17.168 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:32:17.168 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:32:17.168 | 99.00th=[13173], 99.50th=[13435], 99.90th=[17695], 99.95th=[18482], 00:32:17.168 | 99.99th=[18482] 00:32:17.168 bw ( KiB/s): min=24440, max=25888, per=99.87%, avg=25472.00, stdev=694.70, samples=4 00:32:17.168 iops : min= 6110, max= 6472, avg=6368.00, stdev=173.67, samples=4 00:32:17.168 write: IOPS=6379, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2008msec); 0 zone resets 00:32:17.168 slat (nsec): min=2221, max=95215, avg=2429.06, stdev=896.20 00:32:17.168 clat (usec): min=1873, max=15084, avg=8801.49, stdev=804.80 00:32:17.168 lat (usec): min=1880, max=15087, avg=8803.92, stdev=804.76 00:32:17.168 clat percentiles (usec): 00:32:17.168 | 1.00th=[ 6849], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8160], 00:32:17.168 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:32:17.168 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10028], 00:32:17.168 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13566], 99.95th=[13698], 00:32:17.168 | 99.99th=[15008] 00:32:17.168 bw ( KiB/s): min=25280, max=25728, per=99.94%, avg=25504.00, stdev=198.98, samples=4 00:32:17.168 iops : min= 6320, max= 6432, avg=6376.00, stdev=49.75, samples=4 
00:32:17.168 lat (msec) : 2=0.01%, 4=0.08%, 10=51.84%, 20=48.08% 00:32:17.168 cpu : usr=52.57%, sys=39.46%, ctx=57, majf=0, minf=14 00:32:17.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:17.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.168 issued rwts: total=12804,12811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.168 00:32:17.168 Run status group 0 (all jobs): 00:32:17.168 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2008-2008msec 00:32:17.168 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.5MB), run=2008-2008msec 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:17.168 10:26:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.091 10:26:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:19.353 10:26:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- 
# nvmftestfini 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:21.272 rmmod nvme_tcp 00:32:21.272 rmmod nvme_fabrics 00:32:21.272 rmmod nvme_keyring 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3008418 ']' 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3008418 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 3008418 ']' 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 3008418 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:21.272 10:26:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3008418 00:32:21.272 10:26:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:21.272 10:26:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:21.272 10:26:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3008418' 00:32:21.272 killing process with pid 3008418 00:32:21.272 10:26:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 3008418 00:32:21.272 [2024-05-15 10:26:07.029828] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:21.272 10:26:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 3008418 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.569 10:26:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.490 10:26:09 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:23.490 00:32:23.490 real 0m28.522s 00:32:23.490 user 2m27.848s 00:32:23.490 sys 0m9.266s 00:32:23.490 10:26:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:23.490 10:26:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:32:23.490 ************************************ 00:32:23.490 END TEST nvmf_fio_host 00:32:23.490 ************************************ 00:32:23.490 10:26:09 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:23.490 10:26:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:32:23.490 10:26:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:23.490 10:26:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.753 ************************************ 00:32:23.753 START TEST nvmf_failover 00:32:23.753 ************************************ 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:23.753 * Looking for test storage... 00:32:23.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:23.753 10:26:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:31.911 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:31.911 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.911 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.912 10:26:16 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:31.912 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:31.912 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:31.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:32:31.912 00:32:31.912 --- 10.0.0.2 ping statistics --- 00:32:31.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.912 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:31.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:31.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.454 ms 00:32:31.912 00:32:31.912 --- 10.0.0.1 ping statistics --- 00:32:31.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.912 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3016742 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3016742 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 3016742 ']' 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
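The nvmf/common.sh trace above builds the TCP test topology for this failover run: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, so the target and the initiator get separate network stacks on the same host. A condensed sketch of those same steps follows; the interface names and addresses are copied from the trace and would differ on other hosts:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
  ping -c 1 10.0.0.2                                             # root namespace reaches the target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace reaches the initiator address

nvmf_tgt itself is launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is the process whose RPC socket the trace is waiting on.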
00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:31.912 10:26:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.912 [2024-05-15 10:26:16.815397] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:32:31.912 [2024-05-15 10:26:16.815464] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:31.912 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.912 [2024-05-15 10:26:16.905218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:31.912 [2024-05-15 10:26:16.953143] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:31.912 [2024-05-15 10:26:16.953204] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:31.912 [2024-05-15 10:26:16.953212] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:31.912 [2024-05-15 10:26:16.953219] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:31.912 [2024-05-15 10:26:16.953225] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:31.912 [2024-05-15 10:26:16.953319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:31.912 [2024-05-15 10:26:16.953490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:31.912 [2024-05-15 10:26:16.953589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.912 10:26:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:31.912 10:26:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:32:31.912 10:26:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:31.912 10:26:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:31.912 10:26:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.912 10:26:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.912 10:26:17 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:32.174 [2024-05-15 10:26:17.779762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.174 10:26:17 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:32.436 Malloc0 00:32:32.436 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:32.436 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.698 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.698 [2024-05-15 10:26:18.484851] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:32.698 [2024-05-15 10:26:18.485091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.960 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:32.960 [2024-05-15 10:26:18.657547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:32.960 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:33.220 [2024-05-15 10:26:18.826046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3017256 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3017256 /var/tmp/bdevperf.sock 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 3017256 ']' 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:33.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
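At this point failover.sh has finished the target-side setup: a TCP transport, a 64 MB malloc bdev (MALLOC_BDEV_SIZE=64, block size 512) exposed through nqn.2016-06.io.spdk:cnode1, and three listeners on 10.0.0.2 ports 4420, 4421 and 4422 that give the initiator alternate paths to move between. A condensed sketch of the same RPC sequence, with the rpc.py path from the trace shortened to $rpc and the three listener calls folded into a loop:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                # backing namespace for the failover target
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                           # three ports, three candidate paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

bdevperf is started with -z and its own RPC socket (/var/tmp/bdevperf.sock) so that, once it is up, the test can attach NVMe0 to this subsystem over those ports and keep the verify workload running while it removes listeners, starting with the nvmf_subsystem_remove_listener call on port 4420 that follows.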
00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:33.220 10:26:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:33.481 10:26:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:33.481 10:26:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:32:33.481 10:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:33.741 NVMe0n1 00:32:33.741 10:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:34.002 00:32:34.002 10:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:34.002 10:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3017265 00:32:34.002 10:26:19 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:34.945 10:26:20 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:34.945 [2024-05-15 10:26:20.731770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731835] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.945 [2024-05-15 10:26:20.731857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.946 [2024-05-15 10:26:20.731861] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.946 [2024-05-15 10:26:20.731866] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.946 [... the same tcp.c:1598 nvmf_tcp_qpair_set_recv_state error for tqpair=0x9f0460 repeats many times while the 4420 listener is removed; duplicate log lines trimmed ...]
00:32:34.946 [2024-05-15 10:26:20.731967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.946 [2024-05-15 10:26:20.731971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.946 [2024-05-15 10:26:20.731976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:34.946 [2024-05-15 10:26:20.731980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0460 is same with the state(5) to be set 00:32:35.208 10:26:20 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:38.516 10:26:23 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:38.516 00:32:38.516 10:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:38.516 [2024-05-15 10:26:24.166422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166527] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 10:26:24.166532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [2024-05-15 
10:26:24.166536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.517 [... the same tcp.c:1598 nvmf_tcp_qpair_set_recv_state error for tqpair=0x9f0c90 repeats many times while the 4421 listener is removed; duplicate log lines trimmed ...] 00:32:38.518 [2024-05-15 10:26:24.167025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is
same with the state(5) to be set 00:32:38.518 [2024-05-15 10:26:24.167030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.518 [2024-05-15 10:26:24.167034] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.518 [2024-05-15 10:26:24.167039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.518 [2024-05-15 10:26:24.167044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.518 [2024-05-15 10:26:24.167048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f0c90 is same with the state(5) to be set 00:32:38.518 10:26:24 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:41.822 10:26:27 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:41.822 [2024-05-15 10:26:27.342151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.822 10:26:27 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:42.764 10:26:28 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:42.764 [2024-05-15 10:26:28.520937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.520979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.520984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.520989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.520994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.520998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.521003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.521007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.521012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.521016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.521021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.764 [2024-05-15 10:26:28.521025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 
00:32:42.764 [... the same tcp.c:1598 nvmf_tcp_qpair_set_recv_state error for tqpair=0x796610 repeats many times while the 4422 listener is removed; duplicate log lines trimmed ...] 00:32:42.765 [2024-05-15 10:26:28.521325]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.765 [2024-05-15 10:26:28.521329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.765 [2024-05-15 10:26:28.521334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.765 [2024-05-15 10:26:28.521338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.765 [2024-05-15 10:26:28.521342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.765 [2024-05-15 10:26:28.521346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x796610 is same with the state(5) to be set 00:32:42.765 10:26:28 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3017265 00:32:49.363 0 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3017256 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 3017256 ']' 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 3017256 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3017256 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3017256' 00:32:49.363 killing process with pid 3017256 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 3017256 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 3017256 00:32:49.363 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:49.363 [2024-05-15 10:26:18.901729] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:32:49.363 [2024-05-15 10:26:18.901785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017256 ] 00:32:49.363 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.363 [2024-05-15 10:26:18.960509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.363 [2024-05-15 10:26:18.991144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.363 Running I/O for 15 seconds... 
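(The failover exercise that produced the recv-state error bursts above follows the pattern sketched below. This is an illustrative reconstruction from the host/failover.sh trace in this log; the $rpc, $bperf_py, $nqn and run_test_pid names are shorthand introduced here.)
# Sketch only: attach two paths, start the verify job, then remove/re-add listeners to force failovers.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
$bperf_py -s /var/tmp/bdevperf.sock perform_tests &      # starts the 15-second verify run against NVMe0n1
run_test_pid=$!
sleep 1; $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop the first path
sleep 3; $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421            # force another failover
sleep 3; $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
sleep 1; $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # fail back toward 4420
wait $run_test_pid                                                             # let the verify run finish, then tear down bdevperf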
00:32:49.363 [2024-05-15 10:26:20.732812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.363 [2024-05-15 10:26:20.732847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.363 [2024-05-15 10:26:20.732858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.363 [2024-05-15 10:26:20.732866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.732874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.364 [2024-05-15 10:26:20.732882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.732890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.364 [2024-05-15 10:26:20.732896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.732903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f13a0 is same with the state(5) to be set 00:32:49.364 [2024-05-15 10:26:20.732943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.364 [2024-05-15 10:26:20.732952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.732967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.732975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.732984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.732991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.364 [2024-05-15 10:26:20.733195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.364 [2024-05-15 10:26:20.733202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c notice pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion), 2024-05-15 10:26:20.733211 through 10:26:20.734988: queued WRITE (lba 99952 through 100656) and READ (lba 99656 through 99824) commands on sqid:1, len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:49.367 [2024-05-15 10:26:20.735007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:49.367 [2024-05-15 10:26:20.735014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:49.367 [2024-05-15 10:26:20.735020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0
00:32:49.367 [2024-05-15 10:26:20.735028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:49.367 [2024-05-15 10:26:20.735066] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2310070 was disconnected and freed. reset controller.
00:32:49.367 [2024-05-15 10:26:20.735080] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:49.367 [2024-05-15 10:26:20.735089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:49.367 [2024-05-15 10:26:20.738673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:49.367 [2024-05-15 10:26:20.738694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f13a0 (9): Bad file descriptor
00:32:49.367 [2024-05-15 10:26:20.779145] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:49.367 [2024-05-15 10:26:24.167068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:49.367 [2024-05-15 10:26:24.167106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:49.367 [2024-05-15 10:26:24.167117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:49.367 [2024-05-15 10:26:24.167129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:49.367 [2024-05-15 10:26:24.167137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:49.367 [2024-05-15 10:26:24.167144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:49.367 [2024-05-15 10:26:24.167152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:49.367 [2024-05-15 10:26:24.167159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:49.367 [2024-05-15 10:26:24.167166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f13a0 is same with the state(5) to be set
00:32:49.367 [2024-05-15 10:26:24.167779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:49.367 [2024-05-15 10:26:24.167794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:49.367 [2024-05-15 10:26:24.167807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated nvme_qpair.c notice pairs (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion), 2024-05-15 10:26:24.167815 through 10:26:24.169115: queued READ commands on sqid:1, len:8, lba 38352 through 38992, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:49.369 [2024-05-15 10:26:24.169124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:20 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39080 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.369 [2024-05-15 10:26:24.169377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.369 [2024-05-15 10:26:24.169387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:49.370 [2024-05-15 10:26:24.169458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:24.169842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.370 [2024-05-15 10:26:24.169871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.370 [2024-05-15 10:26:24.169877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39360 len:8 PRP1 0x0 PRP2 0x0 00:32:49.370 [2024-05-15 10:26:24.169884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:24.169919] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x231dc70 was disconnected and freed. reset controller. 00:32:49.370 [2024-05-15 10:26:24.169929] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:49.370 [2024-05-15 10:26:24.169937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.370 [2024-05-15 10:26:24.173580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.370 [2024-05-15 10:26:24.173604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f13a0 (9): Bad file descriptor 00:32:49.370 [2024-05-15 10:26:24.337716] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:49.370 [2024-05-15 10:26:28.523208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.370 [2024-05-15 10:26:28.523377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.370 [2024-05-15 10:26:28.523386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523418] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.371 [2024-05-15 10:26:28.523634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:90 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84160 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.523985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.523994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.524001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.524010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.524017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.524026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.371 [2024-05-15 10:26:28.524033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.371 [2024-05-15 10:26:28.524042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 
10:26:28.524064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.372 [2024-05-15 10:26:28.524559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.372 [2024-05-15 10:26:28.524568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524708] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.524987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.524996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.525003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.525018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84728 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:49.373 [2024-05-15 10:26:28.525034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.373 [2024-05-15 10:26:28.525062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84736 len:8 PRP1 0x0 PRP2 0x0 00:32:49.373 [2024-05-15 10:26:28.525069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.373 [2024-05-15 10:26:28.525085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.373 [2024-05-15 10:26:28.525091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84744 len:8 PRP1 0x0 PRP2 0x0 00:32:49.373 [2024-05-15 10:26:28.525098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.373 [2024-05-15 10:26:28.525111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.373 [2024-05-15 10:26:28.525116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84752 len:8 PRP1 0x0 PRP2 0x0 00:32:49.373 [2024-05-15 10:26:28.525123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.373 [2024-05-15 10:26:28.525135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.373 [2024-05-15 10:26:28.525141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84760 len:8 PRP1 0x0 PRP2 0x0 00:32:49.373 [2024-05-15 10:26:28.525147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.373 [2024-05-15 10:26:28.525161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.373 [2024-05-15 10:26:28.525167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84768 len:8 PRP1 0x0 PRP2 0x0 00:32:49.373 [2024-05-15 10:26:28.525174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.373 [2024-05-15 10:26:28.525186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.373 [2024-05-15 10:26:28.525192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84776 len:8 PRP1 0x0 PRP2 0x0 00:32:49.373 [2024-05-15 10:26:28.525199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.373 [2024-05-15 10:26:28.525206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.373 [2024-05-15 10:26:28.525211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.373 [2024-05-15 10:26:28.525217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84784 len:8 PRP1 0x0 PRP2 0x0 00:32:49.373 [2024-05-15 10:26:28.525224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.525231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.525236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.525242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84792 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.525248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.525255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.525260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.525267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84800 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.525274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.525281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.525286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.525298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84808 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.525305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.525313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.525318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.525324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84816 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.525330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.525338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.525343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.525348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84824 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.525357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 
[2024-05-15 10:26:28.525364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.525369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.525375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84832 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.525382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.525389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.525394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.525400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84840 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.525407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.539280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.539316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.539326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84848 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.539335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.539344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:49.374 [2024-05-15 10:26:28.539349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:49.374 [2024-05-15 10:26:28.539356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84856 len:8 PRP1 0x0 PRP2 0x0 00:32:49.374 [2024-05-15 10:26:28.539363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.539404] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2314220 was disconnected and freed. reset controller. 
00:32:49.374 [2024-05-15 10:26:28.539413] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:49.374 [2024-05-15 10:26:28.539441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.374 [2024-05-15 10:26:28.539449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.539459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.374 [2024-05-15 10:26:28.539466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.539474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.374 [2024-05-15 10:26:28.539481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.539489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:49.374 [2024-05-15 10:26:28.539496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:49.374 [2024-05-15 10:26:28.539503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:49.374 [2024-05-15 10:26:28.539530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f13a0 (9): Bad file descriptor 00:32:49.374 [2024-05-15 10:26:28.543134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:49.374 [2024-05-15 10:26:28.703983] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
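The abort/reset burst above is the intended effect of the failover test dropping the active path: queued WRITEs complete with ABORTED - SQ DELETION, the trid fails over from 10.0.0.2:4422 back to 10.0.0.2:4420, and the controller is reset. For reference, the path registration that makes this possible can be summarized as below; this is a condensed sketch of the nvmf_subsystem_add_listener and bdev_nvme_attach_controller calls visible elsewhere in this log (failover.sh@76-@80), with the workspace prefix shortened to scripts/rpc.py and the three attach calls folded into a loop rather than the separate numbered steps the script actually runs.

  # open two extra TCP listeners on the target subsystem (default target RPC socket)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # attach the same subsystem through each port under one controller name so
  # bdev_nvme has alternate paths to fail over between
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done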
00:32:49.374
00:32:49.374 Latency(us)
00:32:49.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:49.374 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:49.374 Verification LBA range: start 0x0 length 0x4000
00:32:49.374 NVMe0n1 : 15.00 11425.02 44.63 898.73 0.00 10357.53 1365.33 28180.48
00:32:49.374 ===================================================================================================================
00:32:49.374 Total : 11425.02 44.63 898.73 0.00 10357.53 1365.33 28180.48
00:32:49.374 Received shutdown signal, test time was about 15.000000 seconds
00:32:49.374
00:32:49.374 Latency(us)
00:32:49.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:49.374 ===================================================================================================================
00:32:49.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3020268
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3020268 /var/tmp/bdevperf.sock
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 3020268 ']'
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:49.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
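The failover.sh@65-@67 lines just above are the pass/fail gate for the first phase: the captured bdevperf output (the try.txt file that is removed at the end of the test) must contain exactly three 'Resetting controller successful' messages, one per forced failover, after which bdevperf is relaunched in RPC-server mode for the next phase. A condensed sketch of that gate, with the capture file referred to as $testdir/try.txt for brevity and waitforlisten being the harness helper shown in the log:

  # gate: exactly three successful controller resets must have been logged
  count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
  (( count == 3 )) || exit 1
  # relaunch bdevperf waiting for RPCs (-z) so the next phase can attach controllers explicitly
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock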
00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:49.374 10:26:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:49.947 10:26:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:49.947 10:26:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:32:49.947 10:26:35 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:50.209 [2024-05-15 10:26:35.874783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:50.209 10:26:35 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:50.501 [2024-05-15 10:26:36.039183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:50.501 10:26:36 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:50.762 NVMe0n1 00:32:50.762 10:26:36 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:51.334 00:32:51.335 10:26:36 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:51.597 00:32:51.597 10:26:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:51.597 10:26:37 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:51.858 10:26:37 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:51.858 10:26:37 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:55.251 10:26:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:55.251 10:26:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:55.251 10:26:40 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3021292 00:32:55.251 10:26:40 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3021292 00:32:55.251 10:26:40 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:56.197 0 00:32:56.197 10:26:41 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:56.197 [2024-05-15 10:26:34.965550] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:32:56.197 [2024-05-15 10:26:34.965649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3020268 ] 00:32:56.197 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.197 [2024-05-15 10:26:35.027973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.197 [2024-05-15 10:26:35.057394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.197 [2024-05-15 10:26:37.529993] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:56.197 [2024-05-15 10:26:37.530035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.197 [2024-05-15 10:26:37.530046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.197 [2024-05-15 10:26:37.530057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.197 [2024-05-15 10:26:37.530064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.198 [2024-05-15 10:26:37.530072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.198 [2024-05-15 10:26:37.530080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.198 [2024-05-15 10:26:37.530089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.198 [2024-05-15 10:26:37.530096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.198 [2024-05-15 10:26:37.530104] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:56.198 [2024-05-15 10:26:37.530125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:56.198 [2024-05-15 10:26:37.530139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ff3a0 (9): Bad file descriptor 00:32:56.198 [2024-05-15 10:26:37.551623] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:56.198 Running I/O for 1 seconds... 
00:32:56.198
00:32:56.198 Latency(us)
00:32:56.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:56.198 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:56.198 Verification LBA range: start 0x0 length 0x4000
00:32:56.198 NVMe0n1 : 1.01 11582.75 45.25 0.00 0.00 10987.39 1488.21 26323.63
00:32:56.198 ===================================================================================================================
00:32:56.198 Total : 11582.75 45.25 0.00 0.00 10987.39 1488.21 26323.63
00:32:56.198 10:26:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:56.198 10:26:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:32:56.460 10:26:42 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:56.460 10:26:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:56.460 10:26:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:32:56.722 10:26:42 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:56.983 10:26:42 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3020268
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 3020268 ']'
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 3020268
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3020268
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3020268'
00:33:00.291 killing process with pid 3020268
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 3020268
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 3020268
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:33:00.291 10:26:45 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:00.291
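The @95-@103 sequence above prunes the extra paths one at a time and checks after each detach (and again after a short sleep) that bdev_nvme_get_controllers still reports NVMe0, i.e. the controller survives on a remaining path; the test then stops bdevperf and deletes the subsystem. Condensed into a sketch, with the workspace prefix shortened to scripts/rpc.py and the individual numbered steps folded together:

  # detach two of the three paths and confirm the controller is still present after each
  for port in 4422 4421; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  done
  sleep 3
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  # teardown: stop the bdevperf process and remove the subsystem from the target
  kill "$bdevperf_pid" && wait "$bdevperf_pid"
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1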
10:26:46 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:00.291 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:00.291 rmmod nvme_tcp 00:33:00.553 rmmod nvme_fabrics 00:33:00.553 rmmod nvme_keyring 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3016742 ']' 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3016742 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 3016742 ']' 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 3016742 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3016742 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3016742' 00:33:00.553 killing process with pid 3016742 00:33:00.553 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 3016742 00:33:00.554 [2024-05-15 10:26:46.202091] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 3016742 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:00.554 10:26:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.105 10:26:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:03.105 00:33:03.105 real 0m39.073s 00:33:03.105 user 
1m59.932s 00:33:03.105 sys 0m8.183s 00:33:03.105 10:26:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:03.105 10:26:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:03.105 ************************************ 00:33:03.105 END TEST nvmf_failover 00:33:03.105 ************************************ 00:33:03.105 10:26:48 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:03.105 10:26:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:03.105 10:26:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:03.105 10:26:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:03.105 ************************************ 00:33:03.105 START TEST nvmf_host_discovery 00:33:03.105 ************************************ 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:03.105 * Looking for test storage... 00:33:03.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:03.105 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:03.106 10:26:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:11.265 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:11.265 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:11.265 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:11.265 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.265 10:26:55 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.265 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:11.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:33:11.265 00:33:11.265 --- 10.0.0.2 ping statistics --- 00:33:11.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.266 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:11.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:33:11.266 00:33:11.266 --- 10.0.0.1 ping statistics --- 00:33:11.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.266 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3026452 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3026452 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 3026452 ']' 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:11.266 10:26:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 [2024-05-15 10:26:55.961308] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:33:11.266 [2024-05-15 10:26:55.961367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.266 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.266 [2024-05-15 10:26:56.022459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.266 [2024-05-15 10:26:56.055050] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:11.266 [2024-05-15 10:26:56.055090] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.266 [2024-05-15 10:26:56.055096] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.266 [2024-05-15 10:26:56.055101] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.266 [2024-05-15 10:26:56.055105] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.266 [2024-05-15 10:26:56.055123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 [2024-05-15 10:26:56.187340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 [2024-05-15 10:26:56.199328] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:11.266 [2024-05-15 10:26:56.199613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 null0 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 null1 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3026602 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3026602 /tmp/host.sock 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 3026602 ']' 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:11.266 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:11.266 10:26:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.266 [2024-05-15 10:26:56.293603] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:33:11.266 [2024-05-15 10:26:56.293664] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3026602 ] 00:33:11.266 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.266 [2024-05-15 10:26:56.357927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.266 [2024-05-15 10:26:56.397363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.529 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:11.529 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:33:11.529 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:11.529 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:11.529 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:11.530 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.793 [2024-05-15 10:26:57.406503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:11.793 
10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.793 10:26:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:12.056 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:12.056 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:33:12.056 10:26:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:33:12.630 [2024-05-15 10:26:58.138677] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:12.630 [2024-05-15 10:26:58.138702] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:12.630 [2024-05-15 10:26:58.138717] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:12.630 [2024-05-15 10:26:58.228996] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:12.891 [2024-05-15 10:26:58.451412] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:33:12.891 [2024-05-15 10:26:58.451435] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:12.891 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.154 10:26:58 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:13.154 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.155 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.417 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:13.417 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:13.417 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:33:13.417 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.417 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:13.417 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.417 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.417 [2024-05-15 10:26:58.958596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:13.417 [2024-05-15 10:26:58.959634] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:13.418 [2024-05-15 10:26:58.959659] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.418 10:26:58 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.418 [2024-05-15 10:26:59.048929] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:13.418 10:26:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:33:13.680 [2024-05-15 10:26:59.357548] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:13.680 [2024-05-15 10:26:59.357571] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:13.680 [2024-05-15 10:26:59.357577] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.627 [2024-05-15 10:27:00.222856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.627 [2024-05-15 10:27:00.222883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.627 [2024-05-15 10:27:00.222894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.627 [2024-05-15 10:27:00.222901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.627 [2024-05-15 10:27:00.222909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.627 [2024-05-15 10:27:00.222917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.627 [2024-05-15 10:27:00.222924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:14.627 [2024-05-15 10:27:00.222931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:14.627 [2024-05-15 10:27:00.222939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.627 [2024-05-15 10:27:00.223129] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:14.627 [2024-05-15 10:27:00.223145] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.627 10:27:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:14.627 [2024-05-15 10:27:00.232864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.627 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:14.627 [2024-05-15 10:27:00.242904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:14.627 [2024-05-15 10:27:00.243584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.627 [2024-05-15 10:27:00.244025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.244037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eb9b0 with addr=10.0.0.2, port=4420 00:33:14.628 [2024-05-15 10:27:00.244046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.628 [2024-05-15 10:27:00.244065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.628 [2024-05-15 10:27:00.244108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:14.628 [2024-05-15 10:27:00.244117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:14.628 [2024-05-15 10:27:00.244126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:14.628 [2024-05-15 10:27:00.244141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.628 [2024-05-15 10:27:00.252961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:14.628 [2024-05-15 10:27:00.253633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.254206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.254218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eb9b0 with addr=10.0.0.2, port=4420 00:33:14.628 [2024-05-15 10:27:00.254227] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.628 [2024-05-15 10:27:00.254245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.628 [2024-05-15 10:27:00.254270] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:14.628 [2024-05-15 10:27:00.254283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:14.628 [2024-05-15 10:27:00.254296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:14.628 [2024-05-15 10:27:00.254311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:14.628 [2024-05-15 10:27:00.263014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:14.628 [2024-05-15 10:27:00.263644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.264239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.264251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eb9b0 with addr=10.0.0.2, port=4420 00:33:14.628 [2024-05-15 10:27:00.264261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.628 [2024-05-15 10:27:00.264279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.628 [2024-05-15 10:27:00.264401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:14.628 [2024-05-15 10:27:00.264410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:14.628 [2024-05-15 10:27:00.264418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:14.628 [2024-05-15 10:27:00.264433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:14.628 [2024-05-15 10:27:00.273069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:14.628 [2024-05-15 10:27:00.273763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.274506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.274542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eb9b0 with addr=10.0.0.2, port=4420 00:33:14.628 [2024-05-15 10:27:00.274553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.628 [2024-05-15 10:27:00.274571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.628 [2024-05-15 10:27:00.274597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:14.628 [2024-05-15 10:27:00.274604] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:14.628 [2024-05-15 10:27:00.274612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:14.628 [2024-05-15 10:27:00.274627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:14.628 [2024-05-15 10:27:00.283125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:14.628 [2024-05-15 10:27:00.283785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.284502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.284543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eb9b0 with addr=10.0.0.2, port=4420 00:33:14.628 [2024-05-15 10:27:00.284555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.628 [2024-05-15 10:27:00.284574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.628 [2024-05-15 10:27:00.284613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:14.628 [2024-05-15 10:27:00.284622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:14.628 [2024-05-15 10:27:00.284629] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:14.628 [2024-05-15 10:27:00.284644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:14.628 [2024-05-15 10:27:00.293385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:14.628 [2024-05-15 10:27:00.293934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.294618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.294655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eb9b0 with addr=10.0.0.2, port=4420 00:33:14.628 [2024-05-15 10:27:00.294666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.628 [2024-05-15 10:27:00.294687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.628 [2024-05-15 10:27:00.294715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:14.628 [2024-05-15 10:27:00.294724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:14.628 [2024-05-15 10:27:00.294732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:14.628 [2024-05-15 10:27:00.294746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:14.628 [2024-05-15 10:27:00.303442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:14.628 [2024-05-15 10:27:00.303946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.304550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.628 [2024-05-15 10:27:00.304587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21eb9b0 with addr=10.0.0.2, port=4420 00:33:14.628 [2024-05-15 10:27:00.304597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21eb9b0 is same with the state(5) to be set 00:33:14.628 [2024-05-15 10:27:00.304616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21eb9b0 (9): Bad file descriptor 00:33:14.628 [2024-05-15 10:27:00.304644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:14.628 [2024-05-15 10:27:00.304652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:14.628 [2024-05-15 10:27:00.304660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:14.628 [2024-05-15 10:27:00.304691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:14.628 [2024-05-15 10:27:00.312665] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:14.628 [2024-05-15 10:27:00.312684] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.628 10:27:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.629 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.891 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:14.892 10:27:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.281 [2024-05-15 10:27:01.676165] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:16.281 [2024-05-15 10:27:01.676190] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:16.282 [2024-05-15 10:27:01.676204] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:16.282 [2024-05-15 10:27:01.765472] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:16.544 [2024-05-15 10:27:02.078655] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:16.544 [2024-05-15 10:27:02.078690] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.544 request: 00:33:16.544 { 00:33:16.544 "name": "nvme", 00:33:16.544 "trtype": "tcp", 00:33:16.544 "traddr": "10.0.0.2", 00:33:16.544 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:16.544 "adrfam": "ipv4", 00:33:16.544 "trsvcid": "8009", 00:33:16.544 "wait_for_attach": true, 00:33:16.544 "method": "bdev_nvme_start_discovery", 00:33:16.544 "req_id": 1 00:33:16.544 } 00:33:16.544 Got JSON-RPC error response 00:33:16.544 response: 00:33:16.544 { 00:33:16.544 "code": -17, 00:33:16.544 "message": "File exists" 00:33:16.544 } 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.544 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.545 request: 00:33:16.545 { 00:33:16.545 "name": "nvme_second", 00:33:16.545 "trtype": "tcp", 00:33:16.545 "traddr": "10.0.0.2", 00:33:16.545 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:16.545 "adrfam": "ipv4", 00:33:16.545 "trsvcid": "8009", 00:33:16.545 "wait_for_attach": true, 00:33:16.545 "method": "bdev_nvme_start_discovery", 00:33:16.545 "req_id": 1 00:33:16.545 } 00:33:16.545 Got JSON-RPC error response 00:33:16.545 response: 00:33:16.545 { 00:33:16.545 "code": -17, 00:33:16.545 "message": "File exists" 00:33:16.545 } 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:16.545 
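(Note on the two -17 responses above: host/discovery.sh starts bdev_nvme_start_discovery once and then asserts that repeating the call against the same discovery endpoint, either with the same controller name or with a new one such as nvme_second, fails with JSON-RPC error -17 "File exists". A minimal standalone sketch of that check follows; it talks to the host app's RPC socket with scripts/rpc.py directly instead of the trace's rpc_cmd helper, the if/echo wrapper stands in for the test's NOT() assertion, and $SPDK_DIR is assumed to point at the SPDK checkout.)

    # Assumes a host SPDK app listening on /tmp/host.sock and a discovery
    # service reachable at 10.0.0.2:8009, as in the trace above.
    rpc_py="$SPDK_DIR/scripts/rpc.py"
    host_sock=/tmp/host.sock

    # First start: attach a discovery controller named "nvme" and wait (-w)
    # for the discovered subsystems to be attached.
    "$rpc_py" -s "$host_sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Repeating the call against the same discovery endpoint must fail with
    # code -17 "File exists"; the test only passes when this RPC errors out.
    if "$rpc_py" -s "$host_sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
        echo "unexpected: duplicate bdev_nvme_start_discovery succeeded" >&2
        exit 1
    fi

    # The original discovery controller is still listed by name.
    "$rpc_py" -s "$host_sock" bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs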
10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:16.545 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:33:16.836 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:33:16.836 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:16.836 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:16.836 10:27:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.782 [2024-05-15 10:27:03.346173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.782 [2024-05-15 10:27:03.346729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.782 [2024-05-15 10:27:03.346743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2206020 with addr=10.0.0.2, port=8010 00:33:17.782 [2024-05-15 10:27:03.346758] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:17.782 [2024-05-15 10:27:03.346766] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:17.782 [2024-05-15 10:27:03.346774] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:18.726 [2024-05-15 10:27:04.348728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.726 [2024-05-15 10:27:04.349266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.727 [2024-05-15 10:27:04.349277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2206020 with addr=10.0.0.2, port=8010 00:33:18.727 [2024-05-15 10:27:04.349288] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:18.727 [2024-05-15 10:27:04.349297] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:18.727 [2024-05-15 10:27:04.349308] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:19.670 [2024-05-15 10:27:05.350521] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:19.670 request: 00:33:19.670 { 00:33:19.670 "name": "nvme_second", 00:33:19.670 "trtype": "tcp", 00:33:19.670 "traddr": "10.0.0.2", 00:33:19.670 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:19.670 
"adrfam": "ipv4", 00:33:19.670 "trsvcid": "8010", 00:33:19.670 "attach_timeout_ms": 3000, 00:33:19.670 "method": "bdev_nvme_start_discovery", 00:33:19.670 "req_id": 1 00:33:19.670 } 00:33:19.670 Got JSON-RPC error response 00:33:19.670 response: 00:33:19.670 { 00:33:19.670 "code": -110, 00:33:19.670 "message": "Connection timed out" 00:33:19.670 } 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:19.670 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3026602 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:19.671 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:19.671 rmmod nvme_tcp 00:33:19.671 rmmod nvme_fabrics 00:33:19.671 rmmod nvme_keyring 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3026452 ']' 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3026452 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 3026452 ']' 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 3026452 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3026452 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3026452' 00:33:19.931 killing process with pid 3026452 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 3026452 00:33:19.931 [2024-05-15 10:27:05.551798] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 3026452 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:19.931 10:27:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.475 10:27:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:22.475 00:33:22.475 real 0m19.243s 00:33:22.475 user 0m22.551s 00:33:22.475 sys 0m6.769s 00:33:22.475 10:27:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:22.475 10:27:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.475 ************************************ 00:33:22.475 END TEST nvmf_host_discovery 00:33:22.475 ************************************ 00:33:22.475 10:27:07 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:22.475 10:27:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:33:22.475 10:27:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:33:22.475 10:27:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.475 ************************************ 00:33:22.475 START TEST nvmf_host_multipath_status 00:33:22.475 ************************************ 00:33:22.475 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:22.475 * Looking for test storage... 
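(Note on the errno 111 retries and the final -110 response in the discovery test above: host/discovery.sh@155 points discovery at port 8010, where nothing listens, with a 3 s attach timeout, and asserts that the RPC fails with "Connection timed out". A hedged sketch of just that case, using the same flags as the trace; the error handling around the call is illustrative only.)

    # Assumes the same host socket as before and that nothing listens on 10.0.0.2:8010.
    rpc_py="$SPDK_DIR/scripts/rpc.py"

    # -T 3000 becomes "attach_timeout_ms": 3000 in the JSON-RPC request; the
    # discovery poller keeps retrying connect() (errno 111) until the timeout
    # expires, then the RPC returns code -110 "Connection timed out".
    if "$rpc_py" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
        -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
        echo "unexpected: discovery toward a closed port did not time out" >&2
        exit 1
    fi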
00:33:22.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:22.476 10:27:07 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:33:22.476 10:27:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:30.621 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:30.621 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
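(Note on the device enumeration above: gather_supported_nvmf_pci_devs matches the two ports of an Intel E810 NIC, device ID 0x159b bound to the ice driver, and then resolves each PCI function to its kernel net device through sysfs before printing "Found net devices under ...". A rough standalone sketch of that resolution step follows; the PCI addresses are taken from the trace, while reading operstate is an assumption about how the script's "up == up" check is derived.)

    # Hypothetical standalone version of the sysfs lookup seen in the trace.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # Every net device bound to this PCI function shows up under .../net/.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        for dev_path in "${pci_net_devs[@]}"; do
            dev=${dev_path##*/}
            # Assumption: only interfaces whose link is up are kept.
            if [[ $(cat "$dev_path/operstate" 2>/dev/null) == up ]]; then
                echo "Found net devices under $pci: $dev"
            fi
        done
    done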
00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:30.621 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:30.621 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:30.621 10:27:14 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.621 10:27:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:30.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:33:30.622 00:33:30.622 --- 10.0.0.2 ping statistics --- 00:33:30.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.622 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:30.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.474 ms 00:33:30.622 00:33:30.622 --- 10.0.0.1 ping statistics --- 00:33:30.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.622 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3033054 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3033054 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 3033054 ']' 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:30.622 10:27:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:30.622 [2024-05-15 10:27:15.399497] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
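(Note on the nvmf_tcp_init steps above: one port of the NIC, cvl_0_0, is moved into a network namespace and becomes the target side at 10.0.0.2, while the other port, cvl_0_1, stays in the default namespace as the initiator at 10.0.0.1; an iptables rule and a ping in each direction confirm the path before nvmf_tgt is started inside the namespace. Condensed into a standalone sketch below; interface names, addresses and flags are copied from the trace, $SPDK_DIR is assumed to be the SPDK checkout, and it must run as root.)

    TARGET_IF=cvl_0_0        # moved into the namespace, becomes the target-side port
    INITIATOR_IF=cvl_0_1     # stays in the default namespace, used by the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic in on the initiator port and verify both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    modprobe nvme-tcp

    # The target app then runs inside the namespace (-m 0x3: cores 0-1,
    # -e 0xFFFF: tracepoint group mask), as in the trace.
    ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &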
00:33:30.622 [2024-05-15 10:27:15.399565] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.622 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.622 [2024-05-15 10:27:15.470306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:30.622 [2024-05-15 10:27:15.508893] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.622 [2024-05-15 10:27:15.508939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.622 [2024-05-15 10:27:15.508947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.622 [2024-05-15 10:27:15.508954] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.622 [2024-05-15 10:27:15.508960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.622 [2024-05-15 10:27:15.509133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.622 [2024-05-15 10:27:15.509135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3033054 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:30.622 [2024-05-15 10:27:16.349410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.622 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:30.884 Malloc0 00:33:30.884 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:31.146 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.146 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.408 [2024-05-15 10:27:16.979991] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:33:31.408 [2024-05-15 10:27:16.980216] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.408 10:27:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:31.408 [2024-05-15 10:27:17.132541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3033420 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3033420 /var/tmp/bdevperf.sock 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 3033420 ']' 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:31.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
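(Note on the RPC sequence above: it builds the multipath fixture - a TCP transport, a 64 MB Malloc0 namespace inside an ANA-reporting subsystem (-r) listening on both 4420 and 4421, and a bdevperf instance on the host side that attaches the same subsystem through each listener, the second time with -x multipath so both paths back a single Nvme0n1 bdev. A condensed sketch of that sequence follows, with the flags copied verbatim from the trace; $SPDK_DIR is again assumed to be the SPDK checkout, and the exact meaning of the reconnect-related -r/-l/-o values is left to the script.)

    rpc_py="$SPDK_DIR/scripts/rpc.py"

    # Target side (default RPC socket): transport, backing bdev, ANA-reporting
    # subsystem, namespace, and two TCP listeners on the same address.
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" bdev_malloc_create 64 512 -b Malloc0
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: bdevperf started in wait mode (-z) on its own RPC socket, then
    # the same subsystem attached once per listener; -x multipath on the second
    # attach makes both paths serve one Nvme0n1 bdev.
    "$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &
    # The real script uses waitforlisten; a socket-existence poll stands in here.
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Kick off the I/O phase; the test then flips ANA states with
    # nvmf_subsystem_listener_set_ana_state and inspects bdev_nvme_get_io_paths
    # while traffic is running.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 120 -s /var/tmp/bdevperf.sock perform_tests &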
00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:33:31.408 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:31.669 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:33:31.669 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:33:31.669 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:31.930 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:32.192 Nvme0n1 00:33:32.192 10:27:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:32.764 Nvme0n1 00:33:32.764 10:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:32.764 10:27:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:34.681 10:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:34.681 10:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:34.942 10:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:35.202 10:27:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.147 10:27:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.409 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:36.409 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.409 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.409 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.671 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.933 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.933 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.933 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.933 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:37.194 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.194 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:37.194 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.194 10:27:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:37.455 10:27:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:38.397 10:27:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:38.397 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:38.397 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.397 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.659 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:38.920 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.920 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:38.920 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.920 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.182 10:27:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:39.444 10:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.444 10:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:39.444 10:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:39.706 10:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:39.706 10:27:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.119 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.381 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.381 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.381 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.381 10:27:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:41.381 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.381 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:41.381 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.381 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.642 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.642 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:41.642 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.642 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:41.903 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.903 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:41.903 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:41.903 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:42.165 10:27:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:43.111 10:27:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:43.111 10:27:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:43.111 10:27:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.111 10:27:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:43.372 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.372 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:43.372 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.372 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:43.633 10:27:29 
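Every port_status check traced in this run follows the same pattern: query the bdevperf application's RPC socket for its I/O paths and compare one field of the path on the given port against the expected value. A rough reconstruction inferred from the commands at multipath_status.sh@64, reusing the $rpcpy shorthand assumed above (local variable names are assumptions):

# Sketch; succeeds iff the selected field of the path on $port equals $expected.
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    # Note the -s /var/tmp/bdevperf.sock: this asks the initiator-side bdevperf app,
    # not the nvmf target, how it currently sees its I/O paths.
    actual=$($rpcpy -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}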
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.633 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:43.633 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.633 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.633 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.633 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:43.633 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.633 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:43.895 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.895 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:43.895 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.895 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.156 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.156 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:44.156 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.156 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:44.156 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:44.156 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:44.156 10:27:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:44.417 10:27:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:44.417 10:27:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:45.806 10:27:31 
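Each check_status call expands into six of those port_status checks, one per field and port (script lines @68-@73 in the trace): the expected "current" flags for the 4420 and 4421 paths, then the expected "connected" flags, then the expected "accessible" flags. A sketch with the argument order inferred from the traced expansions:

# Sketch; argument order inferred from the @68-@73 calls above.
check_status() {
    port_status 4420 current    "$1"
    port_status 4421 current    "$2"
    port_status 4420 connected  "$3"
    port_status 4421 connected  "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

So "check_status false false true true false false" (the @110 check above, issued after both listeners were made inaccessible) expects both paths to stay connected while being neither current nor accessible.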
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.806 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.067 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.067 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.067 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.067 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:46.067 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.067 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:46.327 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.327 10:27:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:46.327 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.327 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:46.327 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.327 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:46.588 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.588 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:46.588 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:46.588 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:46.848 10:27:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:47.795 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:47.795 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:47.795 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:47.796 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:48.057 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.057 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:48.057 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.057 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:48.320 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.320 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:48.320 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.320 10:27:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:48.320 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.320 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:48.320 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.320 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:48.581 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.581 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:48.581 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:48.581 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.842 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.842 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:48.842 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.842 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:48.842 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.842 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:49.103 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:49.103 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:49.363 10:27:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:49.363 10:27:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:50.306 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:50.306 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:50.568 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.568 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:50.568 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.568 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:50.568 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.568 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:33:50.830 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.830 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:50.830 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.830 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:50.830 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.830 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:51.092 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.092 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:51.092 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.092 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:51.092 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.092 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:51.353 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.353 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:51.353 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.353 10:27:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:51.614 10:27:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.614 10:27:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:51.614 10:27:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:51.614 10:27:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:51.875 10:27:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:52.816 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:33:52.816 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:52.816 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.816 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:53.077 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.339 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.339 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:53.339 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.339 10:27:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:53.600 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.600 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:53.600 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.600 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:53.601 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.601 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:53.601 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.601 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:53.862 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.862 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:53.862 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:54.123 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:54.123 10:27:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:55.506 10:27:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:55.506 10:27:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:55.506 10:27:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.506 10:27:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.506 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:55.767 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.767 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:55.768 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.768 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:55.768 10:27:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.768 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:55.768 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.768 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:56.029 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.029 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:56.029 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.029 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:56.291 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.291 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:56.291 10:27:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:56.291 10:27:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:56.553 10:27:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:57.499 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:57.499 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:57.499 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.499 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:57.759 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.759 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:57.759 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.759 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:58.020 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:58.020 10:27:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:58.020 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.020 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:58.020 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.020 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:58.020 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.020 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:58.281 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.281 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:58.281 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.281 10:27:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3033420 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 3033420 ']' 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 3033420 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3033420 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
3033420' 00:33:58.574 killing process with pid 3033420 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 3033420 00:33:58.574 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 3033420 00:33:58.851 Connection closed with partial response: 00:33:58.851 00:33:58.851 00:33:58.851 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3033420 00:33:58.851 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:58.851 [2024-05-15 10:27:17.190627] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:33:58.851 [2024-05-15 10:27:17.190683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033420 ] 00:33:58.851 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.851 [2024-05-15 10:27:17.241376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.851 [2024-05-15 10:27:17.269216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:58.851 Running I/O for 90 seconds... 00:33:58.851 [2024-05-15 10:27:30.017653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.851 [2024-05-15 10:27:30.017688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.851 [2024-05-15 10:27:30.017725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.851 [2024-05-15 10:27:30.017741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.851 [2024-05-15 10:27:30.017757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.851 [2024-05-15 10:27:30.017773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.851 [2024-05-15 10:27:30.017788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017798] 
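The try.txt excerpt that follows is the bdevperf-side log for this run. The completions printed as ASYMMETRIC ACCESS INACCESSIBLE (03/02) are the target rejecting I/O on a path whose ANA state the test had just set to inaccessible; the (03/02) pair is NVMe status code type 0x3 (Path Related Status) with status code 0x02, which is what lets the host's multipath layer fail the I/O over to the other path. A small, hypothetical decode helper for the ANA-related codes seen in such logs:

# Hypothetical helper; maps the SCT/SC pair printed in these completions to a name.
decode_path_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        03/01) echo "Asymmetric Access Persistent Loss" ;;
        03/02) echo "Asymmetric Access Inaccessible" ;;
        03/03) echo "Asymmetric Access Transition" ;;
        *)     echo "unrecognized SCT=$sct SC=$sc" ;;
    esac
}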
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:58.851 [2024-05-15 10:27:30.017803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 
sqhd:003e p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.017956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.017962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.018017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.018035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.018051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.018067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.018084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.018100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.851 [2024-05-15 10:27:30.018118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:58.851 [2024-05-15 10:27:30.018129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 10:27:30.018299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:58.852 [2024-05-15 10:27:30.018311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.852 [2024-05-15 
00:33:58.852 [2024-05-15 10:27:30.018316 .. 10:27:30.020952] nvme_qpair.c: repeated *NOTICE* pairs from nvme_io_qpair_print_command (line 243) and spdk_nvme_print_completion (line 474) elided: READ commands (sqid:1, nsid:1, lba 78160..78560, len:8) and WRITE commands (sqid:1, nsid:1, lba 78624..78880, len:8) all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status on qid:1
00:33:58.854 [2024-05-15 10:27:42.216207 .. 10:27:42.217744] nvme_qpair.c: a second burst of the same *NOTICE* pairs elided: READ/WRITE commands on qid:1 (lba 48368..49312, len:8) again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status
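The (03/02) code in the elided bursts is the NVMe ANA "asymmetric access inaccessible" completion status: the target flips one listener's ANA state, and I/O already queued on that path fails until the host's multipath layer reroutes it. A hedged sketch of poking both sides of this by hand, assuming an SPDK checkout at $rootdir, default RPC socket paths, and a recent enough SPDK for these RPCs (check flag spellings against scripts/rpc.py --help in your tree):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_py=$rootdir/scripts/rpc.py
    # Target side: mark one listener's ANA state inaccessible to force a failover.
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    # Host side: ask the SPDK initiator which I/O paths it sees and their ANA states.
    $rpc_py -s /tmp/host.sock bdev_nvme_get_io_paths -n Nvme0n1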
00:33:58.855 Received shutdown signal, test time was about 25.835392 seconds
00:33:58.855
00:33:58.855                                                 Latency(us)
00:33:58.855 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:58.855 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:58.855 Verification LBA range: start 0x0 length 0x4000
00:33:58.855 Nvme0n1                     :      25.83   11187.01      43.70       0.00     0.00   11421.07     477.87 3019898.88
00:33:58.855 ===================================================================================================================
00:33:58.855 Total                       :               11187.01      43.70       0.00     0.00   11421.07     477.87 3019898.88
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
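Pulled out of the xtrace noise, the teardown at multipath_status.sh@143..@148 is just this short sequence; a sketch of the same steps in isolation, assuming $rootdir is the SPDK checkout used above:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Delete the subsystem first so connected hosts see a clean disconnect.
    $rootdir/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    trap - SIGINT SIGTERM EXIT                 # drop the cleanup trap armed at test start
    rm -f "$rootdir/test/nvmf/host/try.txt"    # scratch file used by this test
    nvmftestfini                               # test/nvmf/common.sh helper: unload modules, kill the target, tear down the netns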
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:58.855 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:58.855 rmmod nvme_tcp
00:33:58.855 rmmod nvme_fabrics
00:33:59.116 rmmod nvme_keyring
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3033054 ']'
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3033054
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 3033054 ']'
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 3033054
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3033054
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3033054'
00:33:59.116 killing process with pid 3033054
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 3033054
00:33:59.116 [2024-05-15 10:27:44.752149] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 3033054
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:59.116 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:59.117 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:59.117 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:59.117 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:59.117 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:59.117 10:27:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
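The killprocess helper traced at common/autotest_common.sh@947..@971 boils down to a guarded kill-and-reap. A simplified paraphrase, not the verbatim helper (the real version also special-cases sudo-wrapped processes):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # refuse to run without a pid
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"                          # SIGTERM so the SPDK reactor can shut down cleanly
        wait "$pid"                          # reap the child and propagate its exit status
    }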
# set +x 00:34:01.667 ************************************ 00:34:01.667 END TEST nvmf_host_multipath_status 00:34:01.667 ************************************ 00:34:01.667 10:27:46 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:01.667 10:27:46 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:34:01.667 10:27:46 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:01.667 10:27:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.667 ************************************ 00:34:01.667 START TEST nvmf_discovery_remove_ifc 00:34:01.667 ************************************ 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:01.667 * Looking for test storage... 00:34:01.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ 
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 .. @4 -- # PATH=... (three xtrace lines elided: each prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to an already long PATH ending ...:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin)
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo of the same PATH value (elided)
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:34:01.667 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:34:01.668 10:27:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:34:08.273 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:08.273 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:34:08.273 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:34:08.273 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:34:08.273 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=()
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=()
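Note the @446 line: nvmftestinit arms a cleanup trap before it touches the machine, so an aborted run still tears the target and namespaces down. The idiom in isolation (nvmftestfini is the test/nvmf/common.sh helper seen earlier in this log):

    trap nvmftestfini SIGINT SIGTERM EXIT   # any exit path (error, ^C, normal) runs the teardown
    # ... test body: create netns, start nvmf_tgt, run I/O ...
    trap - SIGINT SIGTERM EXIT              # cleared only on an explicit, successful finish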
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=()
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 .. @318 -- # mlx+=(${pci_bus_cache["$mellanox:..."]}) for device IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013 (eight near-identical xtrace lines elided)
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:34:08.536 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:34:08.536 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:34:08.536 Found net devices under 0000:4b:00.0: cvl_0_0
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:34:08.536 Found net devices under 0000:4b:00.1: cvl_0_1
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
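The @382..@401 loop above is the whole NIC-discovery trick: the kernel exposes each port's netdev under the PCI device's sysfs node, so no driver-specific tooling is needed. The same few lines, standalone (PCI addresses are this rig's two E810 ports):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob match per netdev on this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done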
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:08.536 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:08.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:08.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms
00:34:08.799
00:34:08.799 --- 10.0.0.2 ping statistics ---
00:34:08.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:08.799 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms
00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:08.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:08.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:34:08.799 00:34:08.799 --- 10.0.0.1 ping statistics --- 00:34:08.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.799 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3042952 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3042952 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 3042952 ']' 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:08.799 10:27:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.799 [2024-05-15 10:27:54.502760] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:34:08.799 [2024-05-15 10:27:54.502826] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.799 EAL: No free 2048 kB hugepages reported on node 1 00:34:08.799 [2024-05-15 10:27:54.590874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.062 [2024-05-15 10:27:54.637182] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.062 [2024-05-15 10:27:54.637239] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.062 [2024-05-15 10:27:54.637247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.062 [2024-05-15 10:27:54.637254] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.062 [2024-05-15 10:27:54.637260] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:09.062 [2024-05-15 10:27:54.637288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.639 [2024-05-15 10:27:55.352492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.639 [2024-05-15 10:27:55.360453] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:09.639 [2024-05-15 10:27:55.360731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:09.639 null0 00:34:09.639 [2024-05-15 10:27:55.392671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3043153 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3043153 /tmp/host.sock 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 3043153 ']' 00:34:09.639 10:27:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:09.639 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:09.639 10:27:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.900 [2024-05-15 10:27:55.466346] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:34:09.900 [2024-05-15 10:27:55.466410] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043153 ] 00:34:09.900 EAL: No free 2048 kB hugepages reported on node 1 00:34:09.900 [2024-05-15 10:27:55.530687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.900 [2024-05-15 10:27:55.569804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.472 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.473 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.734 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:10.734 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:10.735 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:10.735 10:27:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:11.680 [2024-05-15 10:27:57.363621] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:11.680 [2024-05-15 10:27:57.363650] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:11.680 [2024-05-15 
10:27:57.363665] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:11.943 [2024-05-15 10:27:57.495061] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:11.943 [2024-05-15 10:27:57.675097] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:11.943 [2024-05-15 10:27:57.675146] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:11.943 [2024-05-15 10:27:57.675168] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:11.943 [2024-05-15 10:27:57.675182] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:11.943 [2024-05-15 10:27:57.675201] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:11.943 [2024-05-15 10:27:57.681994] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1621330 was disconnected and freed. delete nvme_qpair. 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:11.943 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:12.205 10:27:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:13.152 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.152 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.152 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.152 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.152 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:13.152 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.152 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.415 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:13.415 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:13.415 10:27:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.361 10:27:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.361 10:27:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.361 10:27:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.361 10:27:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:14.361 10:27:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.361 10:27:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.361 10:27:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.361 10:28:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:14.361 10:28:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:14.361 10:28:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:15.306 10:28:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:16.696 10:28:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:17.641 [2024-05-15 10:28:03.115614] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:17.641 [2024-05-15 10:28:03.115661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.641 [2024-05-15 10:28:03.115673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.641 [2024-05-15 10:28:03.115684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.641 [2024-05-15 10:28:03.115691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.641 [2024-05-15 10:28:03.115699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.641 [2024-05-15 10:28:03.115706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.641 [2024-05-15 10:28:03.115714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.641 [2024-05-15 10:28:03.115721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.642 [2024-05-15 10:28:03.115729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:17.642 [2024-05-15 10:28:03.115736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:17.642 [2024-05-15 10:28:03.115743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e84d0 is same with the state(5) to be set 00:34:17.642 [2024-05-15 10:28:03.125633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e84d0 (9): Bad file descriptor 
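The loop traced above is the script's wait_for_bdev polling: after cvl_0_0 is taken down inside the namespace, the host side keeps listing bdevs over /tmp/host.sock until nvme0n1 drops out of the list. A minimal stand-alone sketch of that pattern, assuming SPDK's scripts/rpc.py is used in place of the rpc_cmd wrapper and picking an illustrative 20-second timeout (not the script's exact code):

# Sketch: poll the host app's bdev list until it equals the expected string.
wait_for_bdev_sketch() {
    local expected=$1 timeout=20 bdevs
    while (( timeout-- > 0 )); do
        # Same pipeline as the trace: bdev names -> sorted -> flattened to one line.
        bdevs=$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ $bdevs == "$expected" ]] && return 0
        sleep 1
    done
    return 1
}
# Usage mirroring the test: wait_for_bdev_sketch nvme0n1   (bdev appears)
#                           wait_for_bdev_sketch ""        (bdev removed)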
00:34:17.642 [2024-05-15 10:28:03.135674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:17.642 10:28:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.642 10:28:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.642 10:28:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.642 10:28:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.642 10:28:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:17.642 10:28:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.642 10:28:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.587 [2024-05-15 10:28:04.155334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:19.530 [2024-05-15 10:28:05.179319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:19.530 [2024-05-15 10:28:05.179365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e84d0 with addr=10.0.0.2, port=4420 00:34:19.530 [2024-05-15 10:28:05.179379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e84d0 is same with the state(5) to be set 00:34:19.530 [2024-05-15 10:28:05.179762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e84d0 (9): Bad file descriptor 00:34:19.530 [2024-05-15 10:28:05.179785] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:19.530 [2024-05-15 10:28:05.179804] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:19.530 [2024-05-15 10:28:05.179828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.530 [2024-05-15 10:28:05.179838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.530 [2024-05-15 10:28:05.179849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.530 [2024-05-15 10:28:05.179857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.530 [2024-05-15 10:28:05.179865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.530 [2024-05-15 10:28:05.179872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.530 [2024-05-15 10:28:05.179880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.530 [2024-05-15 10:28:05.179887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.530 [2024-05-15 10:28:05.179894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.530 [2024-05-15 10:28:05.179902] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.530 [2024-05-15 10:28:05.179909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:34:19.530 [2024-05-15 10:28:05.180408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e7960 (9): Bad file descriptor 00:34:19.531 [2024-05-15 10:28:05.181420] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:19.531 [2024-05-15 10:28:05.181431] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:19.531 10:28:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:19.531 10:28:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:19.531 10:28:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:20.475 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:20.737 10:28:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:20.737 10:28:06 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:21.720 [2024-05-15 10:28:07.197316] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:21.720 [2024-05-15 10:28:07.197338] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:21.720 [2024-05-15 10:28:07.197352] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:21.720 [2024-05-15 10:28:07.286643] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:21.720 [2024-05-15 10:28:07.388958] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:21.720 [2024-05-15 10:28:07.389000] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:21.720 [2024-05-15 10:28:07.389021] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:21.720 [2024-05-15 10:28:07.389036] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:21.720 [2024-05-15 10:28:07.389043] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:21.720 [2024-05-15 10:28:07.396182] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15f53a0 was disconnected and freed. delete nvme_qpair. 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3043153 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 3043153 ']' 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 3043153 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:21.720 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3043153 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # 
'[' reactor_0 = sudo ']' 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3043153' 00:34:21.982 killing process with pid 3043153 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 3043153 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 3043153 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:21.982 rmmod nvme_tcp 00:34:21.982 rmmod nvme_fabrics 00:34:21.982 rmmod nvme_keyring 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3042952 ']' 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3042952 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 3042952 ']' 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 3042952 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:34:21.982 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3042952 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3042952' 00:34:22.245 killing process with pid 3042952 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 3042952 00:34:22.245 [2024-05-15 10:28:07.787943] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 3042952 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:22.245 10:28:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.796 10:28:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:24.796 00:34:24.796 real 0m22.922s 00:34:24.796 user 0m26.016s 00:34:24.796 sys 0m6.726s 00:34:24.796 10:28:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:24.796 10:28:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.796 ************************************ 00:34:24.796 END TEST nvmf_discovery_remove_ifc 00:34:24.796 ************************************ 00:34:24.796 10:28:10 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:24.796 10:28:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:34:24.796 10:28:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:24.796 10:28:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:24.796 ************************************ 00:34:24.796 START TEST nvmf_identify_kernel_target 00:34:24.796 ************************************ 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:24.796 * Looking for test storage... 
00:34:24.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:34:24.796 10:28:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.391 
10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:31.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.391 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:31.392 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
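The gather_supported_nvmf_pci_devs pass traced here maps each supported NIC's PCI address to its kernel netdev by globbing sysfs. A stand-alone sketch of the same lookup, using 0000:4b:00.0 from this run's output (any bound PCI network function would do):

# Sketch: resolve a PCI network function to its net interface via sysfs,
# mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above.
pci=0000:4b:00.0
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || { echo "no netdev bound to $pci" >&2; continue; }
    dev=${path##*/}                                   # e.g. cvl_0_0
    state=$(cat "/sys/class/net/$dev/operstate")      # up/down, as the script checks
    echo "Found net device under $pci: $dev ($state)"
done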
00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:31.392 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:31.392 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:31.392 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_1 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:31.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:31.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:34:31.655 00:34:31.655 --- 10.0.0.2 ping statistics --- 00:34:31.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.655 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:31.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:31.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:34:31.655 00:34:31.655 --- 10.0.0.1 ping statistics --- 00:34:31.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:31.655 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.655 10:28:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:31.655 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:31.918 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:31.918 10:28:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:35.230 Waiting for block devices as requested 00:34:35.230 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:35.230 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:35.230 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:35.230 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:35.230 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:35.230 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:35.491 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:35.491 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:35.491 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:35.752 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:35.752 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:36.014 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:36.014 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:36.014 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:36.014 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:36.298 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:36.298 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:36.559 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:36.559 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:36.559 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:36.559 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:34:36.559 
10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:36.559 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:34:36.559 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:36.560 No valid GPT data, bailing 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:36.560 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:34:36.823 00:34:36.823 Discovery Log Number of Records 2, Generation counter 2 00:34:36.823 =====Discovery Log Entry 0====== 00:34:36.823 trtype: tcp 00:34:36.823 adrfam: ipv4 00:34:36.823 subtype: current discovery subsystem 00:34:36.823 treq: not specified, sq flow control disable supported 00:34:36.823 portid: 1 00:34:36.823 trsvcid: 4420 00:34:36.823 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:36.823 traddr: 10.0.0.1 00:34:36.823 eflags: none 00:34:36.823 sectype: none 00:34:36.823 =====Discovery Log Entry 1====== 00:34:36.823 trtype: tcp 00:34:36.823 adrfam: ipv4 00:34:36.823 subtype: nvme subsystem 00:34:36.823 treq: not 
specified, sq flow control disable supported 00:34:36.823 portid: 1 00:34:36.823 trsvcid: 4420 00:34:36.823 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:36.823 traddr: 10.0.0.1 00:34:36.823 eflags: none 00:34:36.823 sectype: none 00:34:36.823 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:36.823 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:36.823 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.823 ===================================================== 00:34:36.823 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:36.823 ===================================================== 00:34:36.823 Controller Capabilities/Features 00:34:36.823 ================================ 00:34:36.823 Vendor ID: 0000 00:34:36.823 Subsystem Vendor ID: 0000 00:34:36.823 Serial Number: 5e37e03ce6ec2d7bcc79 00:34:36.823 Model Number: Linux 00:34:36.823 Firmware Version: 6.7.0-68 00:34:36.823 Recommended Arb Burst: 0 00:34:36.823 IEEE OUI Identifier: 00 00 00 00:34:36.823 Multi-path I/O 00:34:36.823 May have multiple subsystem ports: No 00:34:36.823 May have multiple controllers: No 00:34:36.823 Associated with SR-IOV VF: No 00:34:36.823 Max Data Transfer Size: Unlimited 00:34:36.823 Max Number of Namespaces: 0 00:34:36.823 Max Number of I/O Queues: 1024 00:34:36.823 NVMe Specification Version (VS): 1.3 00:34:36.823 NVMe Specification Version (Identify): 1.3 00:34:36.823 Maximum Queue Entries: 1024 00:34:36.823 Contiguous Queues Required: No 00:34:36.823 Arbitration Mechanisms Supported 00:34:36.823 Weighted Round Robin: Not Supported 00:34:36.823 Vendor Specific: Not Supported 00:34:36.823 Reset Timeout: 7500 ms 00:34:36.823 Doorbell Stride: 4 bytes 00:34:36.823 NVM Subsystem Reset: Not Supported 00:34:36.823 Command Sets Supported 00:34:36.823 NVM Command Set: Supported 00:34:36.823 Boot Partition: Not Supported 00:34:36.823 Memory Page Size Minimum: 4096 bytes 00:34:36.823 Memory Page Size Maximum: 4096 bytes 00:34:36.823 Persistent Memory Region: Not Supported 00:34:36.823 Optional Asynchronous Events Supported 00:34:36.823 Namespace Attribute Notices: Not Supported 00:34:36.823 Firmware Activation Notices: Not Supported 00:34:36.823 ANA Change Notices: Not Supported 00:34:36.823 PLE Aggregate Log Change Notices: Not Supported 00:34:36.823 LBA Status Info Alert Notices: Not Supported 00:34:36.823 EGE Aggregate Log Change Notices: Not Supported 00:34:36.823 Normal NVM Subsystem Shutdown event: Not Supported 00:34:36.823 Zone Descriptor Change Notices: Not Supported 00:34:36.823 Discovery Log Change Notices: Supported 00:34:36.823 Controller Attributes 00:34:36.823 128-bit Host Identifier: Not Supported 00:34:36.823 Non-Operational Permissive Mode: Not Supported 00:34:36.823 NVM Sets: Not Supported 00:34:36.823 Read Recovery Levels: Not Supported 00:34:36.823 Endurance Groups: Not Supported 00:34:36.823 Predictable Latency Mode: Not Supported 00:34:36.823 Traffic Based Keep ALive: Not Supported 00:34:36.823 Namespace Granularity: Not Supported 00:34:36.823 SQ Associations: Not Supported 00:34:36.823 UUID List: Not Supported 00:34:36.823 Multi-Domain Subsystem: Not Supported 00:34:36.823 Fixed Capacity Management: Not Supported 00:34:36.823 Variable Capacity Management: Not Supported 00:34:36.823 Delete Endurance Group: Not Supported 00:34:36.823 Delete NVM Set: Not Supported 00:34:36.823 
Extended LBA Formats Supported: Not Supported 00:34:36.823 Flexible Data Placement Supported: Not Supported 00:34:36.823 00:34:36.823 Controller Memory Buffer Support 00:34:36.823 ================================ 00:34:36.823 Supported: No 00:34:36.823 00:34:36.823 Persistent Memory Region Support 00:34:36.823 ================================ 00:34:36.823 Supported: No 00:34:36.823 00:34:36.823 Admin Command Set Attributes 00:34:36.823 ============================ 00:34:36.823 Security Send/Receive: Not Supported 00:34:36.823 Format NVM: Not Supported 00:34:36.823 Firmware Activate/Download: Not Supported 00:34:36.823 Namespace Management: Not Supported 00:34:36.823 Device Self-Test: Not Supported 00:34:36.823 Directives: Not Supported 00:34:36.823 NVMe-MI: Not Supported 00:34:36.823 Virtualization Management: Not Supported 00:34:36.823 Doorbell Buffer Config: Not Supported 00:34:36.823 Get LBA Status Capability: Not Supported 00:34:36.823 Command & Feature Lockdown Capability: Not Supported 00:34:36.823 Abort Command Limit: 1 00:34:36.823 Async Event Request Limit: 1 00:34:36.823 Number of Firmware Slots: N/A 00:34:36.824 Firmware Slot 1 Read-Only: N/A 00:34:36.824 Firmware Activation Without Reset: N/A 00:34:36.824 Multiple Update Detection Support: N/A 00:34:36.824 Firmware Update Granularity: No Information Provided 00:34:36.824 Per-Namespace SMART Log: No 00:34:36.824 Asymmetric Namespace Access Log Page: Not Supported 00:34:36.824 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:36.824 Command Effects Log Page: Not Supported 00:34:36.824 Get Log Page Extended Data: Supported 00:34:36.824 Telemetry Log Pages: Not Supported 00:34:36.824 Persistent Event Log Pages: Not Supported 00:34:36.824 Supported Log Pages Log Page: May Support 00:34:36.824 Commands Supported & Effects Log Page: Not Supported 00:34:36.824 Feature Identifiers & Effects Log Page:May Support 00:34:36.824 NVMe-MI Commands & Effects Log Page: May Support 00:34:36.824 Data Area 4 for Telemetry Log: Not Supported 00:34:36.824 Error Log Page Entries Supported: 1 00:34:36.824 Keep Alive: Not Supported 00:34:36.824 00:34:36.824 NVM Command Set Attributes 00:34:36.824 ========================== 00:34:36.824 Submission Queue Entry Size 00:34:36.824 Max: 1 00:34:36.824 Min: 1 00:34:36.824 Completion Queue Entry Size 00:34:36.824 Max: 1 00:34:36.824 Min: 1 00:34:36.824 Number of Namespaces: 0 00:34:36.824 Compare Command: Not Supported 00:34:36.824 Write Uncorrectable Command: Not Supported 00:34:36.824 Dataset Management Command: Not Supported 00:34:36.824 Write Zeroes Command: Not Supported 00:34:36.824 Set Features Save Field: Not Supported 00:34:36.824 Reservations: Not Supported 00:34:36.824 Timestamp: Not Supported 00:34:36.824 Copy: Not Supported 00:34:36.824 Volatile Write Cache: Not Present 00:34:36.824 Atomic Write Unit (Normal): 1 00:34:36.824 Atomic Write Unit (PFail): 1 00:34:36.824 Atomic Compare & Write Unit: 1 00:34:36.824 Fused Compare & Write: Not Supported 00:34:36.824 Scatter-Gather List 00:34:36.824 SGL Command Set: Supported 00:34:36.824 SGL Keyed: Not Supported 00:34:36.824 SGL Bit Bucket Descriptor: Not Supported 00:34:36.824 SGL Metadata Pointer: Not Supported 00:34:36.824 Oversized SGL: Not Supported 00:34:36.824 SGL Metadata Address: Not Supported 00:34:36.824 SGL Offset: Supported 00:34:36.824 Transport SGL Data Block: Not Supported 00:34:36.824 Replay Protected Memory Block: Not Supported 00:34:36.824 00:34:36.824 Firmware Slot Information 00:34:36.824 ========================= 00:34:36.824 
Active slot: 0 00:34:36.824 00:34:36.824 00:34:36.824 Error Log 00:34:36.824 ========= 00:34:36.824 00:34:36.824 Active Namespaces 00:34:36.824 ================= 00:34:36.824 Discovery Log Page 00:34:36.824 ================== 00:34:36.824 Generation Counter: 2 00:34:36.824 Number of Records: 2 00:34:36.824 Record Format: 0 00:34:36.824 00:34:36.824 Discovery Log Entry 0 00:34:36.824 ---------------------- 00:34:36.824 Transport Type: 3 (TCP) 00:34:36.824 Address Family: 1 (IPv4) 00:34:36.824 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:36.824 Entry Flags: 00:34:36.824 Duplicate Returned Information: 0 00:34:36.824 Explicit Persistent Connection Support for Discovery: 0 00:34:36.824 Transport Requirements: 00:34:36.824 Secure Channel: Not Specified 00:34:36.824 Port ID: 1 (0x0001) 00:34:36.824 Controller ID: 65535 (0xffff) 00:34:36.824 Admin Max SQ Size: 32 00:34:36.824 Transport Service Identifier: 4420 00:34:36.824 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:36.824 Transport Address: 10.0.0.1 00:34:36.824 Discovery Log Entry 1 00:34:36.824 ---------------------- 00:34:36.824 Transport Type: 3 (TCP) 00:34:36.824 Address Family: 1 (IPv4) 00:34:36.824 Subsystem Type: 2 (NVM Subsystem) 00:34:36.824 Entry Flags: 00:34:36.824 Duplicate Returned Information: 0 00:34:36.824 Explicit Persistent Connection Support for Discovery: 0 00:34:36.824 Transport Requirements: 00:34:36.824 Secure Channel: Not Specified 00:34:36.824 Port ID: 1 (0x0001) 00:34:36.824 Controller ID: 65535 (0xffff) 00:34:36.824 Admin Max SQ Size: 32 00:34:36.824 Transport Service Identifier: 4420 00:34:36.824 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:36.824 Transport Address: 10.0.0.1 00:34:36.824 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.824 EAL: No free 2048 kB hugepages reported on node 1 00:34:36.824 get_feature(0x01) failed 00:34:36.824 get_feature(0x02) failed 00:34:36.824 get_feature(0x04) failed 00:34:36.824 ===================================================== 00:34:36.824 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.824 ===================================================== 00:34:36.824 Controller Capabilities/Features 00:34:36.824 ================================ 00:34:36.824 Vendor ID: 0000 00:34:36.824 Subsystem Vendor ID: 0000 00:34:36.824 Serial Number: 93752d39c95b4e8ce3f7 00:34:36.824 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:36.824 Firmware Version: 6.7.0-68 00:34:36.824 Recommended Arb Burst: 6 00:34:36.824 IEEE OUI Identifier: 00 00 00 00:34:36.824 Multi-path I/O 00:34:36.824 May have multiple subsystem ports: Yes 00:34:36.824 May have multiple controllers: Yes 00:34:36.824 Associated with SR-IOV VF: No 00:34:36.824 Max Data Transfer Size: Unlimited 00:34:36.824 Max Number of Namespaces: 1024 00:34:36.824 Max Number of I/O Queues: 128 00:34:36.824 NVMe Specification Version (VS): 1.3 00:34:36.824 NVMe Specification Version (Identify): 1.3 00:34:36.824 Maximum Queue Entries: 1024 00:34:36.824 Contiguous Queues Required: No 00:34:36.824 Arbitration Mechanisms Supported 00:34:36.824 Weighted Round Robin: Not Supported 00:34:36.824 Vendor Specific: Not Supported 00:34:36.824 Reset Timeout: 7500 ms 00:34:36.824 Doorbell Stride: 4 bytes 00:34:36.824 NVM Subsystem Reset: Not Supported 
00:34:36.824 Command Sets Supported 00:34:36.824 NVM Command Set: Supported 00:34:36.824 Boot Partition: Not Supported 00:34:36.824 Memory Page Size Minimum: 4096 bytes 00:34:36.824 Memory Page Size Maximum: 4096 bytes 00:34:36.824 Persistent Memory Region: Not Supported 00:34:36.824 Optional Asynchronous Events Supported 00:34:36.824 Namespace Attribute Notices: Supported 00:34:36.824 Firmware Activation Notices: Not Supported 00:34:36.824 ANA Change Notices: Supported 00:34:36.824 PLE Aggregate Log Change Notices: Not Supported 00:34:36.824 LBA Status Info Alert Notices: Not Supported 00:34:36.824 EGE Aggregate Log Change Notices: Not Supported 00:34:36.824 Normal NVM Subsystem Shutdown event: Not Supported 00:34:36.824 Zone Descriptor Change Notices: Not Supported 00:34:36.824 Discovery Log Change Notices: Not Supported 00:34:36.824 Controller Attributes 00:34:36.824 128-bit Host Identifier: Supported 00:34:36.824 Non-Operational Permissive Mode: Not Supported 00:34:36.824 NVM Sets: Not Supported 00:34:36.824 Read Recovery Levels: Not Supported 00:34:36.824 Endurance Groups: Not Supported 00:34:36.824 Predictable Latency Mode: Not Supported 00:34:36.824 Traffic Based Keep ALive: Supported 00:34:36.824 Namespace Granularity: Not Supported 00:34:36.824 SQ Associations: Not Supported 00:34:36.824 UUID List: Not Supported 00:34:36.824 Multi-Domain Subsystem: Not Supported 00:34:36.824 Fixed Capacity Management: Not Supported 00:34:36.824 Variable Capacity Management: Not Supported 00:34:36.824 Delete Endurance Group: Not Supported 00:34:36.824 Delete NVM Set: Not Supported 00:34:36.824 Extended LBA Formats Supported: Not Supported 00:34:36.824 Flexible Data Placement Supported: Not Supported 00:34:36.824 00:34:36.824 Controller Memory Buffer Support 00:34:36.824 ================================ 00:34:36.824 Supported: No 00:34:36.824 00:34:36.824 Persistent Memory Region Support 00:34:36.824 ================================ 00:34:36.824 Supported: No 00:34:36.824 00:34:36.824 Admin Command Set Attributes 00:34:36.824 ============================ 00:34:36.824 Security Send/Receive: Not Supported 00:34:36.824 Format NVM: Not Supported 00:34:36.824 Firmware Activate/Download: Not Supported 00:34:36.824 Namespace Management: Not Supported 00:34:36.824 Device Self-Test: Not Supported 00:34:36.824 Directives: Not Supported 00:34:36.824 NVMe-MI: Not Supported 00:34:36.824 Virtualization Management: Not Supported 00:34:36.824 Doorbell Buffer Config: Not Supported 00:34:36.824 Get LBA Status Capability: Not Supported 00:34:36.824 Command & Feature Lockdown Capability: Not Supported 00:34:36.824 Abort Command Limit: 4 00:34:36.824 Async Event Request Limit: 4 00:34:36.824 Number of Firmware Slots: N/A 00:34:36.824 Firmware Slot 1 Read-Only: N/A 00:34:36.824 Firmware Activation Without Reset: N/A 00:34:36.824 Multiple Update Detection Support: N/A 00:34:36.824 Firmware Update Granularity: No Information Provided 00:34:36.824 Per-Namespace SMART Log: Yes 00:34:36.824 Asymmetric Namespace Access Log Page: Supported 00:34:36.824 ANA Transition Time : 10 sec 00:34:36.824 00:34:36.825 Asymmetric Namespace Access Capabilities 00:34:36.825 ANA Optimized State : Supported 00:34:36.825 ANA Non-Optimized State : Supported 00:34:36.825 ANA Inaccessible State : Supported 00:34:36.825 ANA Persistent Loss State : Supported 00:34:36.825 ANA Change State : Supported 00:34:36.825 ANAGRPID is not changed : No 00:34:36.825 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:36.825 00:34:36.825 ANA Group Identifier 
Maximum : 128 00:34:36.825 Number of ANA Group Identifiers : 128 00:34:36.825 Max Number of Allowed Namespaces : 1024 00:34:36.825 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:36.825 Command Effects Log Page: Supported 00:34:36.825 Get Log Page Extended Data: Supported 00:34:36.825 Telemetry Log Pages: Not Supported 00:34:36.825 Persistent Event Log Pages: Not Supported 00:34:36.825 Supported Log Pages Log Page: May Support 00:34:36.825 Commands Supported & Effects Log Page: Not Supported 00:34:36.825 Feature Identifiers & Effects Log Page:May Support 00:34:36.825 NVMe-MI Commands & Effects Log Page: May Support 00:34:36.825 Data Area 4 for Telemetry Log: Not Supported 00:34:36.825 Error Log Page Entries Supported: 128 00:34:36.825 Keep Alive: Supported 00:34:36.825 Keep Alive Granularity: 1000 ms 00:34:36.825 00:34:36.825 NVM Command Set Attributes 00:34:36.825 ========================== 00:34:36.825 Submission Queue Entry Size 00:34:36.825 Max: 64 00:34:36.825 Min: 64 00:34:36.825 Completion Queue Entry Size 00:34:36.825 Max: 16 00:34:36.825 Min: 16 00:34:36.825 Number of Namespaces: 1024 00:34:36.825 Compare Command: Not Supported 00:34:36.825 Write Uncorrectable Command: Not Supported 00:34:36.825 Dataset Management Command: Supported 00:34:36.825 Write Zeroes Command: Supported 00:34:36.825 Set Features Save Field: Not Supported 00:34:36.825 Reservations: Not Supported 00:34:36.825 Timestamp: Not Supported 00:34:36.825 Copy: Not Supported 00:34:36.825 Volatile Write Cache: Present 00:34:36.825 Atomic Write Unit (Normal): 1 00:34:36.825 Atomic Write Unit (PFail): 1 00:34:36.825 Atomic Compare & Write Unit: 1 00:34:36.825 Fused Compare & Write: Not Supported 00:34:36.825 Scatter-Gather List 00:34:36.825 SGL Command Set: Supported 00:34:36.825 SGL Keyed: Not Supported 00:34:36.825 SGL Bit Bucket Descriptor: Not Supported 00:34:36.825 SGL Metadata Pointer: Not Supported 00:34:36.825 Oversized SGL: Not Supported 00:34:36.825 SGL Metadata Address: Not Supported 00:34:36.825 SGL Offset: Supported 00:34:36.825 Transport SGL Data Block: Not Supported 00:34:36.825 Replay Protected Memory Block: Not Supported 00:34:36.825 00:34:36.825 Firmware Slot Information 00:34:36.825 ========================= 00:34:36.825 Active slot: 0 00:34:36.825 00:34:36.825 Asymmetric Namespace Access 00:34:36.825 =========================== 00:34:36.825 Change Count : 0 00:34:36.825 Number of ANA Group Descriptors : 1 00:34:36.825 ANA Group Descriptor : 0 00:34:36.825 ANA Group ID : 1 00:34:36.825 Number of NSID Values : 1 00:34:36.825 Change Count : 0 00:34:36.825 ANA State : 1 00:34:36.825 Namespace Identifier : 1 00:34:36.825 00:34:36.825 Commands Supported and Effects 00:34:36.825 ============================== 00:34:36.825 Admin Commands 00:34:36.825 -------------- 00:34:36.825 Get Log Page (02h): Supported 00:34:36.825 Identify (06h): Supported 00:34:36.825 Abort (08h): Supported 00:34:36.825 Set Features (09h): Supported 00:34:36.825 Get Features (0Ah): Supported 00:34:36.825 Asynchronous Event Request (0Ch): Supported 00:34:36.825 Keep Alive (18h): Supported 00:34:36.825 I/O Commands 00:34:36.825 ------------ 00:34:36.825 Flush (00h): Supported 00:34:36.825 Write (01h): Supported LBA-Change 00:34:36.825 Read (02h): Supported 00:34:36.825 Write Zeroes (08h): Supported LBA-Change 00:34:36.825 Dataset Management (09h): Supported 00:34:36.825 00:34:36.825 Error Log 00:34:36.825 ========= 00:34:36.825 Entry: 0 00:34:36.825 Error Count: 0x3 00:34:36.825 Submission Queue Id: 0x0 00:34:36.825 Command Id: 0x5 
00:34:36.825 Phase Bit: 0 00:34:36.825 Status Code: 0x2 00:34:36.825 Status Code Type: 0x0 00:34:36.825 Do Not Retry: 1 00:34:36.825 Error Location: 0x28 00:34:36.825 LBA: 0x0 00:34:36.825 Namespace: 0x0 00:34:36.825 Vendor Log Page: 0x0 00:34:36.825 ----------- 00:34:36.825 Entry: 1 00:34:36.825 Error Count: 0x2 00:34:36.825 Submission Queue Id: 0x0 00:34:36.825 Command Id: 0x5 00:34:36.825 Phase Bit: 0 00:34:36.825 Status Code: 0x2 00:34:36.825 Status Code Type: 0x0 00:34:36.825 Do Not Retry: 1 00:34:36.825 Error Location: 0x28 00:34:36.825 LBA: 0x0 00:34:36.825 Namespace: 0x0 00:34:36.825 Vendor Log Page: 0x0 00:34:36.825 ----------- 00:34:36.825 Entry: 2 00:34:36.825 Error Count: 0x1 00:34:36.825 Submission Queue Id: 0x0 00:34:36.825 Command Id: 0x4 00:34:36.825 Phase Bit: 0 00:34:36.825 Status Code: 0x2 00:34:36.825 Status Code Type: 0x0 00:34:36.825 Do Not Retry: 1 00:34:36.825 Error Location: 0x28 00:34:36.825 LBA: 0x0 00:34:36.825 Namespace: 0x0 00:34:36.825 Vendor Log Page: 0x0 00:34:36.825 00:34:36.825 Number of Queues 00:34:36.825 ================ 00:34:36.825 Number of I/O Submission Queues: 128 00:34:36.825 Number of I/O Completion Queues: 128 00:34:36.825 00:34:36.825 ZNS Specific Controller Data 00:34:36.825 ============================ 00:34:36.825 Zone Append Size Limit: 0 00:34:36.825 00:34:36.825 00:34:36.825 Active Namespaces 00:34:36.825 ================= 00:34:36.825 get_feature(0x05) failed 00:34:36.825 Namespace ID:1 00:34:36.825 Command Set Identifier: NVM (00h) 00:34:36.825 Deallocate: Supported 00:34:36.825 Deallocated/Unwritten Error: Not Supported 00:34:36.825 Deallocated Read Value: Unknown 00:34:36.825 Deallocate in Write Zeroes: Not Supported 00:34:36.825 Deallocated Guard Field: 0xFFFF 00:34:36.825 Flush: Supported 00:34:36.825 Reservation: Not Supported 00:34:36.825 Namespace Sharing Capabilities: Multiple Controllers 00:34:36.825 Size (in LBAs): 3750748848 (1788GiB) 00:34:36.825 Capacity (in LBAs): 3750748848 (1788GiB) 00:34:36.825 Utilization (in LBAs): 3750748848 (1788GiB) 00:34:36.825 UUID: f8292a72-4e86-411c-aa0e-e84d8e9af384 00:34:36.825 Thin Provisioning: Not Supported 00:34:36.825 Per-NS Atomic Units: Yes 00:34:36.825 Atomic Write Unit (Normal): 8 00:34:36.825 Atomic Write Unit (PFail): 8 00:34:36.825 Preferred Write Granularity: 8 00:34:36.825 Atomic Compare & Write Unit: 8 00:34:36.825 Atomic Boundary Size (Normal): 0 00:34:36.825 Atomic Boundary Size (PFail): 0 00:34:36.825 Atomic Boundary Offset: 0 00:34:36.825 NGUID/EUI64 Never Reused: No 00:34:36.825 ANA group ID: 1 00:34:36.825 Namespace Write Protected: No 00:34:36.825 Number of LBA Formats: 1 00:34:36.825 Current LBA Format: LBA Format #00 00:34:36.825 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:36.825 00:34:36.825 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:36.825 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:36.825 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:34:36.825 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:36.825 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:34:36.825 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:36.825 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:36.825 rmmod nvme_tcp 00:34:36.825 rmmod nvme_fabrics 00:34:36.825 10:28:22 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:37.087 10:28:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:39.001 10:28:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:42.308 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 
00:34:42.308 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:42.308 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:42.570 00:34:42.570 real 0m18.054s 00:34:42.570 user 0m4.554s 00:34:42.570 sys 0m10.253s 00:34:42.570 10:28:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:34:42.570 10:28:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 ************************************ 00:34:42.570 END TEST nvmf_identify_kernel_target 00:34:42.570 ************************************ 00:34:42.570 10:28:28 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:42.570 10:28:28 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:34:42.570 10:28:28 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:34:42.570 10:28:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.570 ************************************ 00:34:42.570 START TEST nvmf_auth_host 00:34:42.570 ************************************ 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:42.570 * Looking for test storage... 00:34:42.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:42.570 10:28:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:42.571 10:28:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@298 -- # mlx=() 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:50.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.717 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:50.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:50.718 
10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:50.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:50.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:50.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:34:50.718 00:34:50.718 --- 10.0.0.2 ping statistics --- 00:34:50.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.718 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:50.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:34:50.718 00:34:50.718 --- 10.0.0.1 ping statistics --- 00:34:50.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.718 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3056999 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3056999 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 
-- # '[' -z 3056999 ']' 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:50.718 10:28:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.011 10:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:51.011 10:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:34:51.011 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:51.011 10:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:34:51.011 10:28:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.011 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.011 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3f4a8efb1accd32c6b5ea20297fc0dc0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LAY 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3f4a8efb1accd32c6b5ea20297fc0dc0 0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3f4a8efb1accd32c6b5ea20297fc0dc0 0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3f4a8efb1accd32c6b5ea20297fc0dc0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LAY 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LAY 
00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LAY 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d054e5be566622f79072d79004d4390877da9514143cc165a87d116f759b5890 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Wwg 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d054e5be566622f79072d79004d4390877da9514143cc165a87d116f759b5890 3 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d054e5be566622f79072d79004d4390877da9514143cc165a87d116f759b5890 3 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d054e5be566622f79072d79004d4390877da9514143cc165a87d116f759b5890 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Wwg 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Wwg 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Wwg 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4466c1af9a7c99b962a20b855fb5855e6c61cdd6e21e0bb1 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.KEG 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4466c1af9a7c99b962a20b855fb5855e6c61cdd6e21e0bb1 0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 
4466c1af9a7c99b962a20b855fb5855e6c61cdd6e21e0bb1 0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4466c1af9a7c99b962a20b855fb5855e6c61cdd6e21e0bb1 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.KEG 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.KEG 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.KEG 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4a41225ebf7a6616812103f38b62ec934a3faff99b39be42 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8tu 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4a41225ebf7a6616812103f38b62ec934a3faff99b39be42 2 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4a41225ebf7a6616812103f38b62ec934a3faff99b39be42 2 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4a41225ebf7a6616812103f38b62ec934a3faff99b39be42 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8tu 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8tu 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.8tu 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:51.012 10:28:36 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1db3bfe093acf1ea65434bd1cede03c7 00:34:51.012 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pyd 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1db3bfe093acf1ea65434bd1cede03c7 1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1db3bfe093acf1ea65434bd1cede03c7 1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1db3bfe093acf1ea65434bd1cede03c7 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pyd 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pyd 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.pyd 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3600b1d255d1af892a28a5e7fd75f3a0 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KXg 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3600b1d255d1af892a28a5e7fd75f3a0 1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3600b1d255d1af892a28a5e7fd75f3a0 1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3600b1d255d1af892a28a5e7fd75f3a0 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KXg 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KXg 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.KXg 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 
00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1b3a1f1a89e6ca6e13febc44db834a0c9bd8a221956f26f7 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tBL 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1b3a1f1a89e6ca6e13febc44db834a0c9bd8a221956f26f7 2 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1b3a1f1a89e6ca6e13febc44db834a0c9bd8a221956f26f7 2 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1b3a1f1a89e6ca6e13febc44db834a0c9bd8a221956f26f7 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tBL 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tBL 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.tBL 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=082431055c0a84ee36814e31b9736b80 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.j4v 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 082431055c0a84ee36814e31b9736b80 0 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 082431055c0a84ee36814e31b9736b80 0 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=082431055c0a84ee36814e31b9736b80 
00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:34:51.279 10:28:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.j4v 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.j4v 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.j4v 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eaf8d794ed2cd1052c2da818cd5c194087872671f94edb9d1676f4ace56266ea 00:34:51.279 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bvM 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eaf8d794ed2cd1052c2da818cd5c194087872671f94edb9d1676f4ace56266ea 3 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eaf8d794ed2cd1052c2da818cd5c194087872671f94edb9d1676f4ace56266ea 3 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eaf8d794ed2cd1052c2da818cd5c194087872671f94edb9d1676f4ace56266ea 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:34:51.280 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bvM 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bvM 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bvM 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3056999 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 3056999 ']' 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
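Editor's note: the block above generates five host keys (keys[0..4]) and their optional controller keys (ckeys[0..3]) via gen_dhchap_key. A hedged reconstruction of that helper follows: draw a random hex secret, wrap it in the DH-HMAC-CHAP "DHHC-1:<hash id>:<base64>:" form, and store it mode 0600 in a temp file. The base64 payload appears, from the traced keys, to be the secret text plus its little-endian CRC-32; treat that encoding as an inference, not the authoritative implementation.

gen_dhchap_key() {
    local digest=$1 len=$2                    # digest: null|sha256|sha384|sha512, len: hex chars
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters of randomness
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' \
        "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                                   # secrets are kept private to the runner
    echo "$file"
}

# Usage matching the trace: keys[0]=$(gen_dhchap_key null 32); ckeys[0]=$(gen_dhchap_key sha512 64)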
00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LAY 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.541 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Wwg ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Wwg 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.KEG 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.8tu ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8tu 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.pyd 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.KXg ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KXg 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
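Editor's note: once the target answers, the traced loop registers every generated secret in the SPDK keyring under the names key0..key4 and ckey0..ckey3, which the attach calls later reference. A condensed sketch of that loop, assuming rpc.py talks to the default /var/tmp/spdk.sock and that keys[i]/ckeys[i] hold the temp-file paths from gen_dhchap_key:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
    # Host secret for slot i ...
    "$rpc" keyring_file_add_key "key$i" "${keys[$i]}"
    # ... and the optional bidirectional (controller) secret, when one was generated.
    if [[ -n "${ckeys[$i]}" ]]; then
        "$rpc" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done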
00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.tBL 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.j4v ]] 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.j4v 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.542 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.803 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.803 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:51.803 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bvM 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
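Editor's note: configure_kernel_target, traced next, exposes one local NVMe namespace through the kernel nvmet/TCP target so the SPDK host side has something to authenticate against. A condensed, hedged sketch using the standard nvmet configfs attributes; the mapping of each anonymous traced "echo" to a specific attribute is my reading of the trace, not taken from the script (the trace also writes a "SPDK-<nqn>" model/serial string, omitted here):

subnqn=nqn.2024-02.io.spdk:cnode0
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$subnqn
ns=$subsys/namespaces/1
port=$nvmet/ports/1

modprobe nvmet
mkdir "$subsys"
mkdir "$ns"
mkdir "$port"

echo 1 > "$subsys/attr_allow_any_host"       # tightened later once allowed_hosts is populated
echo /dev/nvme0n1 > "$ns/device_path"        # backing block device selected by setup.sh reset
echo 1 > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"          # publish the subsystem on the TCP port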
00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:51.804 10:28:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:55.108 Waiting for block devices as requested 00:34:55.108 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:55.108 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:55.108 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:55.108 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:55.369 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:55.369 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:55.369 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:55.369 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:55.630 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:55.630 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:55.890 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:55.890 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:55.890 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:56.151 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:56.151 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:56.151 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:56.151 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:57.094 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:57.094 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:57.094 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:57.094 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:34:57.094 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:57.094 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:34:57.094 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:57.095 No valid GPT data, bailing 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:57.095 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:34:57.357 00:34:57.357 Discovery Log Number of Records 2, Generation counter 2 00:34:57.357 =====Discovery Log Entry 0====== 00:34:57.357 trtype: tcp 00:34:57.357 adrfam: ipv4 00:34:57.357 subtype: current discovery subsystem 00:34:57.357 treq: not specified, sq flow control disable supported 00:34:57.357 portid: 1 00:34:57.357 trsvcid: 4420 00:34:57.357 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:57.357 traddr: 10.0.0.1 00:34:57.357 eflags: none 00:34:57.357 sectype: none 00:34:57.357 =====Discovery Log Entry 1====== 00:34:57.357 trtype: tcp 00:34:57.357 adrfam: ipv4 00:34:57.357 subtype: nvme subsystem 00:34:57.357 treq: not specified, sq flow control disable supported 00:34:57.357 portid: 1 00:34:57.357 trsvcid: 4420 00:34:57.357 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:57.357 traddr: 10.0.0.1 00:34:57.357 eflags: none 00:34:57.357 sectype: none 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 
]] 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.357 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.358 10:28:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.358 nvme0n1 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.358 
10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.358 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.618 
10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.618 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.619 nvme0n1 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.619 10:28:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.619 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.879 nvme0n1 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
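Editor's note: each connect_authenticate iteration, like the sha256/ffdhe2048/keyid=1 pass traced above, programs the target-side secret for the host and then attaches from the SPDK initiator with the matching keyring entries. A hedged sketch of one iteration; the nvmet host configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are my assumption about where the traced echoes land, while the RPC calls mirror the trace directly (secret strings shortened to placeholders):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Target side: pin the digest, DH group, and DHHC-1 secrets this host must present.
echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"
echo ffdhe2048      > "$host_cfg/dhchap_dhgroup"
echo 'DHHC-1:00:<host secret placeholder>:'       > "$host_cfg/dhchap_key"
echo 'DHHC-1:02:<controller secret placeholder>:' > "$host_cfg/dhchap_ctrl_key"   # only for bidirectional auth

# Host side: restrict the initiator to the same digest/dhgroup, attach with the
# matching keyring entries, verify the controller appears, then tear down.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$rpc" bdev_nvme_get_controllers | grep -q nvme0     # expect the authenticated controller to be listed
"$rpc" bdev_nvme_detach_controller nvme0             # detach before the next digest/dhgroup/key combination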
00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:57.879 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.140 nvme0n1 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:34:58.140 10:28:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.140 10:28:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.401 nvme0n1 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.401 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.661 nvme0n1 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:34:58.661 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.662 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.921 nvme0n1 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.921 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:58.922 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.182 nvme0n1 00:34:59.182 
10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.182 10:28:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.441 nvme0n1 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
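The host/auth.sh@48-@51 records around this point show nvmet_auth_set_key provisioning the target side for one keyid: it echoes the HMAC digest, the DH group, and the DHHC-1 secrets it was handed. A minimal sketch of that helper, assuming the echoes land in the kernel nvmet configfs host attributes; the trace shows only the values (hmac(sha256), ffdhe3072, DHHC-1:...), not the destination files, and keys[]/ckeys[] are the secret arrays the test populated earlier in the run.

    # Sketch only; configfs destinations are an assumption, not taken from the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

        echo "hmac($digest)" > "$host/dhchap_hash"      # host/auth.sh@48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # host/auth.sh@49
        echo "$key"          > "$host/dhchap_key"       # host/auth.sh@50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # controller key, when one exists
    }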
00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.441 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.702 nvme0n1 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.702 
10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.702 10:28:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.702 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.964 nvme0n1 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:59.964 10:28:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:59.964 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:34:59.965 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.227 nvme0n1 00:35:00.227 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.227 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.227 10:28:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.227 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.227 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.227 10:28:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.227 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.227 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.227 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.227 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:00.488 10:28:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.488 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.749 nvme0n1 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.749 10:28:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:00.749 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.750 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:00.750 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.010 nvme0n1 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
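Each connect_authenticate call in this trace first resolves the initiator address through get_main_ns_ip (the nvmf/common.sh@741-@755 records repeated before every attach). A sketch reconstructed from those records; TEST_TRANSPORT and the NVMF_* variable names are assumptions, since the trace only shows them already expanded to "tcp" and 10.0.0.1.

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of the variable to read
        [[ -z ${!ip} ]] && return 1            # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                          # trace: echo 10.0.0.1
    }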
00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.010 10:28:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.271 nvme0n1 00:35:01.271 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.271 10:28:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.271 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.271 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.271 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.271 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.531 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.532 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.793 nvme0n1 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:01.793 10:28:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:01.793 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.365 nvme0n1 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.365 
10:28:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.365 10:28:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.365 10:28:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.365 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.939 nvme0n1 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:02.939 10:28:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.512 nvme0n1 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.512 
10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:03.512 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.083 nvme0n1 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.084 10:28:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.344 nvme0n1 00:35:04.344 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.344 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.345 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.345 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.345 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:04.606 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.178 nvme0n1 00:35:05.178 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:05.178 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.179 10:28:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.179 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:05.179 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.179 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:05.444 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.444 10:28:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.444 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:05.444 10:28:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.444 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:05.445 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.019 nvme0n1 00:35:06.019 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:06.019 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.019 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.019 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:06.019 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.019 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:06.280 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.280 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.280 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:06.281 10:28:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.852 nvme0n1 00:35:06.852 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:06.852 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.852 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.852 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:06.852 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.852 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.112 
10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.112 10:28:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.683 nvme0n1 00:35:07.683 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.683 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.683 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.683 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.683 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.683 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:07.944 
10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:07.944 10:28:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.518 nvme0n1 00:35:08.518 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:08.518 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.518 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:08.518 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.518 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.518 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:08.780 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.781 nvme0n1 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:08.781 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.044 nvme0n1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.044 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.306 nvme0n1 00:35:09.306 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.306 10:28:54 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.306 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.306 10:28:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.306 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.306 10:28:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.306 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.307 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.568 nvme0n1 00:35:09.568 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.568 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.568 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.568 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.568 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.568 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.569 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.830 nvme0n1 00:35:09.830 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.830 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.830 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.830 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.830 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.830 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
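The trace above repeatedly runs the get_main_ns_ip helper from nvmf/common.sh: it keeps a map from transport name to the environment variable that holds the initiator-side address, then resolves that variable indirectly (here tcp -> NVMF_INITIATOR_IP -> 10.0.0.1). A minimal sketch of that candidate-selection logic, distilled from the trace rather than copied from the script, is below; TEST_TRANSPORT and the NVMF_* variables are assumed to be exported by the test environment.

    # Sketch only: candidate-selection logic as seen in the nvmf/common.sh trace.
    # The real helper may differ; TEST_TRANSPORT is an assumed variable name.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates

        # Each transport publishes its address in a different variable.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        # Indirect expansion: turn the variable *name* into its value, e.g. 10.0.0.1.
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }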
00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:09.831 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.093 nvme0n1 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
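Each connect_authenticate iteration in the trace follows the same host-side pattern: restrict the initiator to one digest and one DH group with bdev_nvme_set_options, attach to the target with the matching DH-HMAC-CHAP key pair, confirm the controller appears as nvme0, then detach before the next digest/dhgroup/keyid combination. The sketch below distills that flow from the trace; it is not the verbatim auth.sh function, and it assumes rpc_cmd, the ckeys array, and the keyN/ckeyN key names were set up earlier in the test.

    # Sketch only: per-key host-side flow exercised by connect_authenticate above.
    # rpc_cmd is the SPDK RPC wrapper used throughout this log; keyN/ckeyN names
    # are assumed to have been registered earlier in auth.sh.
    connect_authenticate_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=()

        # Controller key is optional; only pass it when a ckey exists for this id.
        [[ -n ${ckeys[keyid]} ]] && ckey=(--dhchap-ctrlr-key "ckey${keyid}")

        # Restrict the host to a single digest and DH group for this iteration.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Attach with DH-HMAC-CHAP enabled for the chosen key pair.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded if the controller shows up under its bdev name.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

        # Tear down so the next digest/dhgroup/key combination starts clean.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }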
00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.093 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.356 nvme0n1 00:35:10.356 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.356 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.356 10:28:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.356 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.356 10:28:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.356 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.619 nvme0n1 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.619 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.881 nvme0n1 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:10.881 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.144 nvme0n1 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.144 10:28:56 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.144 10:28:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.406 nvme0n1 00:35:11.406 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.406 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.406 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.406 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.406 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.406 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.668 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.669 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.931 nvme0n1 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.931 10:28:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:11.931 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.193 nvme0n1 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:12.193 10:28:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.193 10:28:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.455 10:28:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.455 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.455 10:28:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.455 nvme0n1 00:35:12.455 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.455 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.455 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.455 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.455 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.455 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:35:12.717 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.979 nvme0n1 00:35:12.979 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:12.980 10:28:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.555 nvme0n1 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.555 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:13.556 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.818 nvme0n1 00:35:13.818 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:13.818 10:28:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.818 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.818 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:13.818 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.818 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.080 10:28:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.341 nvme0n1 00:35:14.341 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.341 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:14.604 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.605 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.867 nvme0n1 00:35:14.867 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:14.867 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.867 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.867 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:14.867 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
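Every digest/dhgroup/keyid combination in this trace runs the same sequence: auth.sh programs the key on the target via nvmet_auth_set_key, restricts the SPDK host to a single DH-CHAP digest and DH group with bdev_nvme_set_options, attaches with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), confirms the controller comes up as nvme0, and detaches it again. A condensed sketch of that loop, paraphrased from the trace rather than copied from auth.sh (rpc_cmd is the harness wrapper around SPDK's rpc.py, and the DHHC-1 secrets live in the keys/ckeys arrays expanded above):

# Condensed view of the flow the xtrace output above repeats per combination.
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Target side: install the host key (and controller key, if any) for this digest/dhgroup.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      # Host side: permit only this digest/dhgroup, then authenticate while attaching.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # Success means the controller is visible as nvme0; detach before the next combination.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done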
00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:15.129 10:29:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.704 nvme0n1 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
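The block of nvmf/common.sh lines that repeats between those RPC calls is get_main_ns_ip, the helper auth.sh calls to decide which address the initiator dials: it maps the transport to the name of the environment variable holding the right IP and prints that variable's value, which is why every attach in this run targets 10.0.0.1. A paraphrased sketch, assuming the transport is carried in TEST_TRANSPORT as elsewhere in the SPDK test harness (the variable name itself is not visible in the expanded trace):

# Rough shape of the helper traced at nvmf/common.sh@741-755; not the verbatim source.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) dial the initiator IP
    # Bail out if the transport is unknown, otherwise dereference the chosen variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"                                # resolves to 10.0.0.1 in this run
}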
00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:15.704 10:29:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.279 nvme0n1 00:35:16.279 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.279 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.279 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.279 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.279 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.279 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:16.541 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:16.542 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.116 nvme0n1 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.117 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.379 10:29:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.952 nvme0n1 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.952 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:17.953 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.215 10:29:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.216 10:29:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:18.216 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.216 10:29:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.791 nvme0n1 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:18.791 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.054 10:29:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.054 10:29:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.629 nvme0n1 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.629 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.892 nvme0n1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.892 10:29:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:19.892 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.186 nvme0n1 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.186 10:29:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.448 nvme0n1 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.448 10:29:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:20.448 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.449 10:29:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.449 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.711 nvme0n1 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.711 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.973 nvme0n1 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.973 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:20.974 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.236 nvme0n1 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.236 
10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.236 10:29:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.236 10:29:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.498 nvme0n1 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
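Each attach in this trace is preceded by a get_main_ns_ip lookup (the nvmf/common.sh@741-755 records above) that resolves which address to dial. A rough bash reconstruction of that helper, inferred only from the xtrace lines shown here (the TEST_TRANSPORT variable name and the return-on-empty behaviour are assumptions, not confirmed by this excerpt):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs dial the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # tcp runs dial the initiator-side IP
        [[ -z $TEST_TRANSPORT ]] && return 1                     # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}         # holds the *name* of the address variable
        [[ -z ${!ip} ]] && return 1                  # indirect expansion; trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                # 10.0.0.1, fed to the -a flag of the attach below
    }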
00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.498 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.760 nvme0n1 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.760 10:29:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
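The nvmet_auth_set_key calls in this block push the matching secrets to the kernel nvmet target before each attach. The echoed values (hash name, DH group, DHHC-1 host key, optional controller key) are visible in the trace above; where they are written is not shown in this excerpt, so the configfs paths in the sketch below are an assumption based on the standard Linux nvmet host attributes (keys abbreviated):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'      > "$host/dhchap_hash"       # digest used for DH-HMAC-CHAP
    echo ffdhe3072           > "$host/dhchap_dhgroup"    # FFDHE group under test
    echo 'DHHC-1:02:MWIz...' > "$host/dhchap_key"        # host secret (keyid 3 above)
    echo 'DHHC-1:00:MDgy...' > "$host/dhchap_ctrl_key"   # controller secret, only when a ckey is set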
00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:21.760 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.022 nvme0n1 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.022 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.023 
10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.023 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.285 nvme0n1 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.285 10:29:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.548 nvme0n1 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.548 10:29:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.548 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.810 nvme0n1 00:35:22.810 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:22.810 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.810 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:22.810 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.810 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.073 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.335 nvme0n1 00:35:23.335 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.335 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:23.335 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.335 10:29:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.335 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.335 10:29:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:23.335 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.336 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.598 nvme0n1 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.598 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.860 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:23.861 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:23.861 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.123 nvme0n1 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
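The iterations in this trace repeat one host-side pattern per digest/DH-group/key-id combination. The sketch below condenses that pattern from the RPC calls visible above; it is an illustration, not a verbatim excerpt of host/auth.sh, and it assumes rpc_cmd is the autotest wrapper around scripts/rpc.py while digest, dhgroup and keyid stand in for the loop variables.

  # One connect_authenticate pass, condensed from the trace (sketch only).
  digest=sha512 dhgroup=ffdhe6144 keyid=0

  # Restrict the initiator to the digest and DH group under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Connect with DH-HMAC-CHAP: key$keyid authenticates the host; ckey$keyid,
  # when configured, additionally makes the controller authenticate itself.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # The handshake succeeded if the controller shows up; detach for the next pass.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0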
00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.123 10:29:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.699 nvme0n1 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
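The per-key branching visible in these passes comes from the ckey expansion at host/auth.sh@58: when a controller key exists for the key id, the attach becomes bidirectional; otherwise only the host key is sent. A minimal sketch of that idiom, assuming ckeys[] holds the controller keys (empty for key id 4 in this run):

  # Expands to "--dhchap-ctrlr-key ckey<id>" when ckeys[keyid] is non-empty,
  # and to nothing at all when it is empty (the "[[ -z '' ]]" case in the trace).
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

  # key ids 0-3: bidirectional auth (host key plus controller key).
  # key id 4:    unidirectional auth, only --dhchap-key key4 is passed.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"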
00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.700 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.963 nvme0n1 00:35:24.963 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:24.963 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.963 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:24.963 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.964 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.964 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.226 10:29:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.801 nvme0n1 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.801 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:25.802 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.064 nvme0n1 00:35:26.064 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.064 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.064 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.064 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.064 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.064 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.327 10:29:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.902 nvme0n1 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.902 10:29:12 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2Y0YThlZmIxYWNjZDMyYzZiNWVhMjAyOTdmYzBkYzA4WRIF: 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZDA1NGU1YmU1NjY2MjJmNzkwNzJkNzkwMDRkNDM5MDg3N2RhOTUxNDE0M2NjMTY1YTg3ZDExNmY3NTliNTg5ML6wfOI=: 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:26.902 10:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.476 nvme0n1 00:35:27.476 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:27.476 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.476 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:27.476 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.476 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.476 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:27.738 10:29:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.311 nvme0n1 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.311 10:29:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.311 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWRiM2JmZTA5M2FjZjFlYTY1NDM0YmQxY2VkZTAzYzdAQxCG: 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: ]] 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzYwMGIxZDI1NWQxYWY4OTJhMjhhNWU3ZmQ3NWYzYTCKgSpu: 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:28.574 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.148 nvme0n1 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.148 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MWIzYTFmMWE4OWU2Y2E2ZTEzZmViYzQ0ZGI4MzRhMGM5YmQ4YTIyMTk1NmYyNmY3XkpK0A==: 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: ]] 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MDgyNDMxMDU1YzBhODRlZTM2ODE0ZTMxYjk3MzZiODCaJ5OD: 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:29.410 10:29:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.410 10:29:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.984 nvme0n1 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:29.984 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWFmOGQ3OTRlZDJjZDEwNTJjMmRhODE4Y2Q1YzE5NDA4Nzg3MjY3MWY5NGVkYjlkMTY3NmY0YWNlNTYyNjZlYRdREQk=: 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:35:30.247 10:29:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.821 nvme0n1 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:30.821 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDQ2NmMxYWY5YTdjOTliOTYyYTIwYjg1NWZiNTg1NWU2YzYxY2RkNmUyMWUwYmIxslGyUQ==: 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGE0MTIyNWViZjdhNjYxNjgxMjEwM2YzOGI2MmVjOTM0YTNmYWZmOTliMzliZTQy1jnRDQ==: 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.082 
10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.082 request: 00:35:31.082 { 00:35:31.082 "name": "nvme0", 00:35:31.082 "trtype": "tcp", 00:35:31.082 "traddr": "10.0.0.1", 00:35:31.082 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:31.082 "adrfam": "ipv4", 00:35:31.082 "trsvcid": "4420", 00:35:31.082 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:31.082 "method": "bdev_nvme_attach_controller", 00:35:31.082 "req_id": 1 00:35:31.082 } 00:35:31.082 Got JSON-RPC error response 00:35:31.082 response: 00:35:31.082 { 00:35:31.082 "code": -32602, 00:35:31.082 "message": "Invalid parameters" 00:35:31.082 } 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:31.082 
10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.082 request: 00:35:31.082 { 00:35:31.082 "name": "nvme0", 00:35:31.082 "trtype": "tcp", 00:35:31.082 "traddr": "10.0.0.1", 00:35:31.082 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:31.082 "adrfam": "ipv4", 00:35:31.082 "trsvcid": "4420", 00:35:31.082 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:31.082 "dhchap_key": "key2", 00:35:31.082 "method": "bdev_nvme_attach_controller", 00:35:31.082 "req_id": 1 00:35:31.082 } 00:35:31.082 Got JSON-RPC error response 00:35:31.082 response: 00:35:31.082 { 00:35:31.082 "code": -32602, 00:35:31.082 "message": "Invalid parameters" 00:35:31.082 } 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 
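After the positive passes, the script switches to failure cases: the target is re-keyed (sha256, ffdhe2048, key id 1) and attaches with no key or with the wrong key must be rejected, which is what the -32602 "Invalid parameters" JSON-RPC responses above are verifying. A minimal sketch of the pattern, assuming NOT is the autotest_common.sh helper that succeeds only when the wrapped command fails:

  # Wrong key: the attach has to fail for the test to pass.
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

  # No controller may be left behind after the failed handshake
  # (the jq-length check that follows in the trace).
  (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))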
00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.082 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:31.083 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.346 request: 00:35:31.346 { 00:35:31.346 "name": "nvme0", 00:35:31.346 "trtype": "tcp", 00:35:31.346 "traddr": "10.0.0.1", 00:35:31.346 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:31.346 "adrfam": "ipv4", 00:35:31.346 "trsvcid": "4420", 00:35:31.346 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:31.346 "dhchap_key": "key1", 00:35:31.346 "dhchap_ctrlr_key": "ckey2", 00:35:31.346 "method": "bdev_nvme_attach_controller", 00:35:31.346 
"req_id": 1 00:35:31.346 } 00:35:31.346 Got JSON-RPC error response 00:35:31.346 response: 00:35:31.346 { 00:35:31.346 "code": -32602, 00:35:31.346 "message": "Invalid parameters" 00:35:31.346 } 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:31.346 rmmod nvme_tcp 00:35:31.346 rmmod nvme_fabrics 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3056999 ']' 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3056999 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 3056999 ']' 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 3056999 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:31.346 10:29:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3056999 00:35:31.346 10:29:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:35:31.346 10:29:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:35:31.346 10:29:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3056999' 00:35:31.346 killing process with pid 3056999 00:35:31.346 10:29:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 3056999 00:35:31.346 10:29:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 3056999 00:35:31.607 10:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:31.607 10:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:31.607 10:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:31.607 10:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:31.607 10:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:31.607 
10:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.607 10:29:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:31.607 10:29:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:33.525 10:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.883 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:36.883 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:37.145 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:37.145 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:35:37.407 10:29:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LAY /tmp/spdk.key-null.KEG /tmp/spdk.key-sha256.pyd /tmp/spdk.key-sha384.tBL /tmp/spdk.key-sha512.bvM /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:37.407 10:29:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:39.961 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:80:01.4 (8086 0b00): Already 
using the vfio-pci driver 00:35:39.961 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:39.961 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:39.961 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:40.533 00:35:40.533 real 0m57.852s 00:35:40.533 user 0m51.644s 00:35:40.533 sys 0m14.780s 00:35:40.533 10:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:35:40.533 10:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.533 ************************************ 00:35:40.533 END TEST nvmf_auth_host 00:35:40.533 ************************************ 00:35:40.533 10:29:26 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:35:40.533 10:29:26 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:40.533 10:29:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:35:40.533 10:29:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:40.533 10:29:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:40.533 ************************************ 00:35:40.533 START TEST nvmf_digest 00:35:40.533 ************************************ 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:40.533 * Looking for test storage... 
00:35:40.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:40.533 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:40.534 10:29:26 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:35:40.534 10:29:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:47.131 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:47.131 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:47.131 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:47.131 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:47.132 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:47.132 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:47.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:47.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:35:47.395 00:35:47.395 --- 10.0.0.2 ping statistics --- 00:35:47.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.395 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:47.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:47.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:35:47.395 00:35:47.395 --- 10.0.0.1 ping statistics --- 00:35:47.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:47.395 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:35:47.395 10:29:32 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:47.395 ************************************ 00:35:47.395 START TEST nvmf_digest_clean 00:35:47.395 ************************************ 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3073096 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3073096 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 3073096 ']' 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:47.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:47.395 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:47.395 [2024-05-15 10:29:33.069450] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:47.395 [2024-05-15 10:29:33.069512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:47.395 EAL: No free 2048 kB hugepages reported on node 1 00:35:47.395 [2024-05-15 10:29:33.140214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.395 [2024-05-15 10:29:33.178230] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:47.395 [2024-05-15 10:29:33.178280] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:47.395 [2024-05-15 10:29:33.178288] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:47.395 [2024-05-15 10:29:33.178300] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:47.395 [2024-05-15 10:29:33.178306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
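[editorial note] The digest test starts its own target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, records the pid, and blocks on waitforlisten until the RPC socket answers; the startup notices above come from that nvmf_tgt instance. A short sketch of that launch, with the command and namespace copied from the nvmf/common.sh@480 trace; the comment about waitforlisten summarizes its role, its internals are not in this excerpt:
# hedged sketch of the target launch traced above
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# waitforlisten "$nvmfpid" polls /var/tmp/spdk.sock until the app accepts RPCs before the test continues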
00:35:47.395 [2024-05-15 10:29:33.178332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:48.340 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.341 null0 00:35:48.341 [2024-05-15 10:29:33.939108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:48.341 [2024-05-15 10:29:33.963104] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:48.341 [2024-05-15 10:29:33.963335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3073429 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3073429 /var/tmp/bperf.sock 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 3073429 ']' 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:35:48.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.341 10:29:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:48.341 [2024-05-15 10:29:34.025682] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:48.341 [2024-05-15 10:29:34.025740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073429 ] 00:35:48.341 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.341 [2024-05-15 10:29:34.102352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.341 [2024-05-15 10:29:34.133133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.286 10:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:49.286 10:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:35:49.286 10:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:49.286 10:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:49.286 10:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:49.286 10:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.286 10:29:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.548 nvme0n1 00:35:49.810 10:29:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:49.810 10:29:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:49.810 Running I/O for 2 seconds... 
00:35:51.731 00:35:51.731 Latency(us) 00:35:51.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.731 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:51.731 nvme0n1 : 2.00 20847.00 81.43 0.00 0.00 6131.78 3372.37 21626.88 00:35:51.731 =================================================================================================================== 00:35:51.731 Total : 20847.00 81.43 0.00 0.00 6131.78 3372.37 21626.88 00:35:51.731 0 00:35:51.731 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:51.731 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:51.731 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:51.731 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:51.731 | select(.opcode=="crc32c") 00:35:51.731 | "\(.module_name) \(.executed)"' 00:35:51.731 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3073429 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 3073429 ']' 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 3073429 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3073429 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3073429' 00:35:51.993 killing process with pid 3073429 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 3073429 00:35:51.993 Received shutdown signal, test time was about 2.000000 seconds 00:35:51.993 00:35:51.993 Latency(us) 00:35:51.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:51.993 =================================================================================================================== 00:35:51.993 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:51.993 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 3073429 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:52.255 10:29:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3074106 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3074106 /var/tmp/bperf.sock 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 3074106 ']' 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:52.255 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:52.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:52.256 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:52.256 10:29:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:52.256 [2024-05-15 10:29:37.836578] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:52.256 [2024-05-15 10:29:37.836630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074106 ] 00:35:52.256 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:52.256 Zero copy mechanism will not be used. 
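[editorial note] run_bperf is invoked here with two I/O shapes per workload: 4096-byte blocks at queue depth 128 and 131072-byte blocks at queue depth 16. The "zero copy threshold (65536)" notice above is expected for the 128 KiB cases, since the I/O size exceeds that limit. A hedged sketch of how each case launches its bdevperf instance, flags copied from the host/digest.sh@82 traces; only one instance runs at a time, the previous one is killed before the next shape starts:
# hedged sketch; the 4096-byte cases differ only in "-o 4096 ... -q 128"
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
bperfpid=$!
# the harness then waits for /var/tmp/bperf.sock before driving the run over that socket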
00:35:52.256 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.256 [2024-05-15 10:29:37.911931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.256 [2024-05-15 10:29:37.941316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.830 10:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:52.830 10:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:35:52.830 10:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:52.830 10:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:52.830 10:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:53.092 10:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:53.092 10:29:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:53.354 nvme0n1 00:35:53.354 10:29:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:53.354 10:29:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:53.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:53.616 Zero copy mechanism will not be used. 00:35:53.616 Running I/O for 2 seconds... 
00:35:55.535 00:35:55.535 Latency(us) 00:35:55.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.535 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:55.535 nvme0n1 : 2.01 2296.83 287.10 0.00 0.00 6962.27 4560.21 13653.33 00:35:55.536 =================================================================================================================== 00:35:55.536 Total : 2296.83 287.10 0.00 0.00 6962.27 4560.21 13653.33 00:35:55.536 0 00:35:55.536 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:55.536 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:55.536 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:55.536 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:55.536 | select(.opcode=="crc32c") 00:35:55.536 | "\(.module_name) \(.executed)"' 00:35:55.536 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3074106 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 3074106 ']' 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 3074106 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3074106 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3074106' 00:35:55.798 killing process with pid 3074106 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 3074106 00:35:55.798 Received shutdown signal, test time was about 2.000000 seconds 00:35:55.798 00:35:55.798 Latency(us) 00:35:55.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.798 =================================================================================================================== 00:35:55.798 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 3074106 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:55.798 10:29:41 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3074797 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3074797 /var/tmp/bperf.sock 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 3074797 ']' 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:55.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:55.798 10:29:41 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:56.060 [2024-05-15 10:29:41.608383] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:35:56.060 [2024-05-15 10:29:41.608437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074797 ] 00:35:56.060 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.060 [2024-05-15 10:29:41.683476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.060 [2024-05-15 10:29:41.710715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.632 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:35:56.632 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:35:56.632 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:56.632 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:56.632 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:56.923 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.923 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:57.185 nvme0n1 00:35:57.185 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:57.185 10:29:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:57.185 Running I/O for 2 seconds... 
00:35:59.739 00:35:59.739 Latency(us) 00:35:59.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.739 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:59.739 nvme0n1 : 2.01 21256.70 83.03 0.00 0.00 6009.75 3386.03 22937.60 00:35:59.739 =================================================================================================================== 00:35:59.739 Total : 21256.70 83.03 0.00 0.00 6009.75 3386.03 22937.60 00:35:59.739 0 00:35:59.739 10:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:59.739 10:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:59.739 10:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:59.739 10:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:59.739 | select(.opcode=="crc32c") 00:35:59.739 | "\(.module_name) \(.executed)"' 00:35:59.739 10:29:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3074797 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 3074797 ']' 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 3074797 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3074797 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3074797' 00:35:59.739 killing process with pid 3074797 00:35:59.739 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 3074797 00:35:59.739 Received shutdown signal, test time was about 2.000000 seconds 00:35:59.739 00:35:59.739 Latency(us) 00:35:59.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.740 =================================================================================================================== 00:35:59.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 3074797 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:59.740 10:29:45 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3075481 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3075481 /var/tmp/bperf.sock 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 3075481 ']' 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:59.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:59.740 10:29:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:59.740 [2024-05-15 10:29:45.286860] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:35:59.740 [2024-05-15 10:29:45.286914] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075481 ] 00:35:59.740 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:59.740 Zero copy mechanism will not be used. 
00:35:59.740 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.740 [2024-05-15 10:29:45.360064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.740 [2024-05-15 10:29:45.388175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.314 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:00.314 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:36:00.314 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:00.314 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:00.314 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:00.576 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:00.576 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:00.839 nvme0n1 00:36:00.839 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:00.839 10:29:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:01.100 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:01.100 Zero copy mechanism will not be used. 00:36:01.101 Running I/O for 2 seconds... 
00:36:03.021 00:36:03.021 Latency(us) 00:36:03.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.021 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:03.021 nvme0n1 : 2.01 1489.48 186.19 0.00 0.00 10713.58 8519.68 33641.81 00:36:03.021 =================================================================================================================== 00:36:03.021 Total : 1489.48 186.19 0.00 0.00 10713.58 8519.68 33641.81 00:36:03.021 0 00:36:03.021 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:03.021 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:03.021 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:03.021 | select(.opcode=="crc32c") 00:36:03.021 | "\(.module_name) \(.executed)"' 00:36:03.021 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:03.021 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3075481 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 3075481 ']' 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 3075481 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3075481 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3075481' 00:36:03.282 killing process with pid 3075481 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 3075481 00:36:03.282 Received shutdown signal, test time was about 2.000000 seconds 00:36:03.282 00:36:03.282 Latency(us) 00:36:03.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.282 =================================================================================================================== 00:36:03.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:03.282 10:29:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 3075481 00:36:03.282 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3073096 00:36:03.282 10:29:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 3073096 ']' 00:36:03.282 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 3073096 00:36:03.282 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3073096 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3073096' 00:36:03.544 killing process with pid 3073096 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 3073096 00:36:03.544 [2024-05-15 10:29:49.130209] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 3073096 00:36:03.544 00:36:03.544 real 0m16.240s 00:36:03.544 user 0m31.469s 00:36:03.544 sys 0m3.263s 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:03.544 ************************************ 00:36:03.544 END TEST nvmf_digest_clean 00:36:03.544 ************************************ 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:03.544 ************************************ 00:36:03.544 START TEST nvmf_digest_error 00:36:03.544 ************************************ 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3076287 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3076287 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 3076287 ']' 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:03.544 10:29:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:03.806 [2024-05-15 10:29:49.384348] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:03.806 [2024-05-15 10:29:49.384396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.806 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.806 [2024-05-15 10:29:49.449198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.806 [2024-05-15 10:29:49.481886] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.806 [2024-05-15 10:29:49.481926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.806 [2024-05-15 10:29:49.481934] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.806 [2024-05-15 10:29:49.481941] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.806 [2024-05-15 10:29:49.481946] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:03.806 [2024-05-15 10:29:49.481968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.379 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:04.379 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:36:04.379 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:04.379 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:04.379 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:04.641 [2024-05-15 10:29:50.179958] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:04.641 null0 00:36:04.641 [2024-05-15 10:29:50.250276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.641 [2024-05-15 10:29:50.274270] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:04.641 [2024-05-15 10:29:50.274502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3076540 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3076540 /var/tmp/bperf.sock 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 3076540 ']' 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:04.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:04.641 10:29:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:04.641 [2024-05-15 10:29:50.325054] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:04.641 [2024-05-15 10:29:50.325102] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076540 ] 00:36:04.641 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.641 [2024-05-15 10:29:50.400208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.641 [2024-05-15 10:29:50.428682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:05.585 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:05.853 nvme0n1 00:36:05.853 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:05.853 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:05.853 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:05.853 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:05.854 10:29:51 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:05.854 10:29:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:05.854 Running I/O for 2 seconds... 00:36:06.125 [2024-05-15 10:29:51.658325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.658357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.658365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.674711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.674730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.674737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.687662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.687680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.687687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.699784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.699802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.699809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.713374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.713392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.713398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.725871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.725892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.725899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.736479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.736496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.736503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.749247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.749264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.749270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.761844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.761861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.774044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.774060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.774067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.786910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.786926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.786933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.799149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.799166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.799172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.811217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.811234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.811240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.823974] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.823990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.823997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.835460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.835477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.835483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.848313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.848330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.848336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.860636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.860652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.860659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.871672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.871688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.871695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.885381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.885398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.885405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.125 [2024-05-15 10:29:51.897584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.897601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.897607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:06.125 [2024-05-15 10:29:51.910009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.125 [2024-05-15 10:29:51.910025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.125 [2024-05-15 10:29:51.910032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:51.921615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:51.921631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:51.921638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:51.934373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:51.934390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:51.934401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:51.946305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:51.946322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:51.946328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:51.959209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:51.959226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:51.959233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:51.970898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:51.970915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:51.970922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:51.984282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:51.984301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:51.984308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:51.994975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:51.994992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:51.994999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.008154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.008171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.008177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.020845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.020862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.020868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.032079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.032096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.032102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.045247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.045266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.045272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.057147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.057163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.057169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.069444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.069460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.069466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.082253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.082269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.082276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.094417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.094433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.094439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.106597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.106613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.106619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.119166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.119182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.119189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.131213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.131229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.452 [2024-05-15 10:29:52.131235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.452 [2024-05-15 10:29:52.142938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.452 [2024-05-15 10:29:52.142954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.142961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.156252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.156269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:06.453 [2024-05-15 10:29:52.156275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.167061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.167078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.167084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.179277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.179297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.179303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.192245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.192261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.192267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.204403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.204419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.204425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.217447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.217463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.217469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.229396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.229412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.229418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.453 [2024-05-15 10:29:52.242253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.453 [2024-05-15 10:29:52.242270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15339 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.453 [2024-05-15 10:29:52.242276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.254498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.254515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.254525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.265736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.265752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.265758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.278358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.278375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.278381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.291617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.291634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.291640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.303685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.303702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.303708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.315807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.315823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.315829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.326947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.326963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.326970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.339788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.339805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.339812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.351449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.351466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.351472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.363380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.363397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.363403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.376157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.376173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.376180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.388545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.388561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.388567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.400258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.400275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.400281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.413239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 
00:36:06.716 [2024-05-15 10:29:52.413256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.413262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.425942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.425959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.425965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.437879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.437896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.437902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.449657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.449673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.449679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.462726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.462742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.462752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.474377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.474393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.474399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.486525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.486542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.486548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.716 [2024-05-15 10:29:52.499105] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.716 [2024-05-15 10:29:52.499122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.716 [2024-05-15 10:29:52.499128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.512067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.512083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.512089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.524180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.524196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.524202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.535257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.535273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.535280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.548276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.548295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.548302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.560045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.560062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.560068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.572761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.572781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.572787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:06.979 [2024-05-15 10:29:52.585959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.585976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.585982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.597209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.597225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.597232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.609543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.609558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.609564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.621810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.621826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.621832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.633830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.633846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.633852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.645869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.645886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.645892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.659197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.659213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.659219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.671460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.671477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.671483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.683682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.979 [2024-05-15 10:29:52.683698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.979 [2024-05-15 10:29:52.683704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.979 [2024-05-15 10:29:52.695496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.980 [2024-05-15 10:29:52.695513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.980 [2024-05-15 10:29:52.695519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.980 [2024-05-15 10:29:52.707948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.980 [2024-05-15 10:29:52.707965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.980 [2024-05-15 10:29:52.707971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.980 [2024-05-15 10:29:52.720005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.980 [2024-05-15 10:29:52.720021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.980 [2024-05-15 10:29:52.720027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.980 [2024-05-15 10:29:52.732074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.980 [2024-05-15 10:29:52.732090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.980 [2024-05-15 10:29:52.732096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.980 [2024-05-15 10:29:52.744482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.980 [2024-05-15 10:29:52.744499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.980 [2024-05-15 10:29:52.744505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.980 [2024-05-15 10:29:52.756439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.980 [2024-05-15 10:29:52.756455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.980 [2024-05-15 10:29:52.756461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:06.980 [2024-05-15 10:29:52.769353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:06.980 [2024-05-15 10:29:52.769370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.980 [2024-05-15 10:29:52.769376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.781528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.781545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.781559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.794063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.794085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.806012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.806029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.806036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.818020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.818036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.818043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.830158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.830175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:07.242 [2024-05-15 10:29:52.830182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.842840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.842857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.842863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.855018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.855036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.855043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.867196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.867212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.867219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.879180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.879196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.879203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.891198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.891213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.242 [2024-05-15 10:29:52.891219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.242 [2024-05-15 10:29:52.903310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.242 [2024-05-15 10:29:52.903327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.903333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:52.916525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:52.916542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:24283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.916548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:52.928691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:52.928708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.928714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:52.940037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:52.940053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.940060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:52.952537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:52.952554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.952560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:52.965732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:52.965749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.965755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:52.977463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:52.977480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.977486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:52.989769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:52.989785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:52.989794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:53.001939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:53.001956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:53.001962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:53.014672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:53.014688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:53.014694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.243 [2024-05-15 10:29:53.025875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.243 [2024-05-15 10:29:53.025892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.243 [2024-05-15 10:29:53.025899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.038855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.038872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.038878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.051454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.051471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.051478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.064269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.064286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.064297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.075178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.075196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.075202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.089001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 
[2024-05-15 10:29:53.089018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.089024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.101244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.101265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.101271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.111610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.111627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.111633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.125528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.125544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.125550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.137606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.137623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.137629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.149855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.149872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.149878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.162097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.162114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.162120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.174234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.174251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.174257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.185986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.186003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.186010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.198058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.198076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.198082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.211465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.211481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.211488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.223380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.223396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.223402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.234965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.234983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.234989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.248744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.248761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.248767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.261095] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.261112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.261118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.506 [2024-05-15 10:29:53.270778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.506 [2024-05-15 10:29:53.270795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.506 [2024-05-15 10:29:53.270801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.507 [2024-05-15 10:29:53.285339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.507 [2024-05-15 10:29:53.285356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.507 [2024-05-15 10:29:53.285362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.507 [2024-05-15 10:29:53.297382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.507 [2024-05-15 10:29:53.297399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.507 [2024-05-15 10:29:53.297405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.309901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.309918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.309927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.321994] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.322010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.322017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.335512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.335529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.335535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:36:07.769 [2024-05-15 10:29:53.346231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.346248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.346254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.357558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.357575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.357582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.370564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.370582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.370588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.383895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.383912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.383918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.395209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.395226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.395232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.408261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.408277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.408283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.420738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.420759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.420765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.431950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.431967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.431973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.444638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.444655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.444661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.456410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.456427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.456433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.468370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.468387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.468393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.480677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.480694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.480700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.493186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.493202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.493208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.505215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.505231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.505237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.517902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.517918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.517927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.529477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.529494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.529500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.542271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.542287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.542297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.769 [2024-05-15 10:29:53.554534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:07.769 [2024-05-15 10:29:53.554551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:07.769 [2024-05-15 10:29:53.554557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.031 [2024-05-15 10:29:53.566917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:08.031 [2024-05-15 10:29:53.566934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.031 [2024-05-15 10:29:53.566940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.031 [2024-05-15 10:29:53.578908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:08.031 [2024-05-15 10:29:53.578925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.031 [2024-05-15 10:29:53.578931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.031 [2024-05-15 10:29:53.591543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:08.031 [2024-05-15 10:29:53.591559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:08.031 [2024-05-15 10:29:53.591565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.031 [2024-05-15 10:29:53.603591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:08.031 [2024-05-15 10:29:53.603608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.031 [2024-05-15 10:29:53.603614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.031 [2024-05-15 10:29:53.616279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:08.031 [2024-05-15 10:29:53.616299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.031 [2024-05-15 10:29:53.616306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.031 [2024-05-15 10:29:53.628545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed4720) 00:36:08.031 [2024-05-15 10:29:53.628564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:08.031 [2024-05-15 10:29:53.628571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:08.031 00:36:08.031 Latency(us) 00:36:08.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.031 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:08.031 nvme0n1 : 2.00 20555.35 80.29 0.00 0.00 6219.19 3290.45 23920.64 00:36:08.031 =================================================================================================================== 00:36:08.031 Total : 20555.35 80.29 0.00 0.00 6219.19 3290.45 23920.64 00:36:08.031 0 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:08.031 | .driver_specific 00:36:08.031 | .nvme_error 00:36:08.031 | .status_code 00:36:08.031 | .command_transient_transport_error' 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3076540 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 3076540 ']' 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 3076540 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:08.031 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3076540 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3076540' 00:36:08.293 killing process with pid 3076540 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 3076540 00:36:08.293 Received shutdown signal, test time was about 2.000000 seconds 00:36:08.293 00:36:08.293 Latency(us) 00:36:08.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.293 =================================================================================================================== 00:36:08.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 3076540 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3077218 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3077218 /var/tmp/bperf.sock 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 3077218 ']' 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:08.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:08.293 10:29:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:08.293 [2024-05-15 10:29:54.019146] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:08.293 [2024-05-15 10:29:54.019202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077218 ] 00:36:08.293 I/O size of 131072 is greater than zero copy threshold (65536). 
00:36:08.293 Zero copy mechanism will not be used. 00:36:08.293 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.555 [2024-05-15 10:29:54.092488] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.555 [2024-05-15 10:29:54.120569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.128 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:09.128 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:36:09.128 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:09.128 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:09.389 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:09.389 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.389 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:09.389 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.389 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:09.389 10:29:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:09.651 nvme0n1 00:36:09.651 10:29:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:09.651 10:29:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:09.651 10:29:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:09.651 10:29:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:09.651 10:29:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:09.651 10:29:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:09.651 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:09.651 Zero copy mechanism will not be used. 00:36:09.651 Running I/O for 2 seconds... 
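The pass/fail decision traced earlier, just before the first bdevperf instance was killed, is host/digest.sh's get_transient_errcount: bdevperf keeps per-status-code NVMe error counters (enabled with bdev_nvme_set_options --nvme-error-stat), and the test requires the COMMAND TRANSIENT TRANSPORT ERROR counter (the 00/22 pair printed in the completions above) to be non-zero after the timed run; it was 161 here. A minimal stand-alone sketch of that check follows; the rpc.py path, socket, and bdev name are the ones that appear in this log, everything else is illustrative.

  #!/usr/bin/env bash
  # Sketch only: re-creates the get_transient_errcount check seen in the trace.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  bdev=nvme0n1

  # Query per-bdev I/O statistics from the running bdevperf and pull out the
  # transient transport error counter from the NVMe error statistics block.
  errs=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

  # The digest-error case only passes when at least one transient transport
  # error was counted, i.e. the injected digest corruption was actually seen.
  (( errs > 0 ))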
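The trace immediately above, ending at "Running I/O for 2 seconds...", is run_bperf_err randread 131072 16 re-arming the same fault for the 128 KiB, queue-depth-16 case: a fresh bdevperf is started, NVMe error statistics and unlimited retries are enabled over its RPC socket, the controller is attached over TCP with data digest enabled (--ddgst), and the accel crc32c operation is switched from disable to corrupt with -i 32 so the digests stop matching. The condensed sketch below mirrors that sequence under stated assumptions: the accel_error_inject_error calls go through the suite's rpc_cmd helper, which is assumed here to reach the target application on its default /var/tmp/spdk.sock; addresses, the NQN, and all option values are copied from the log, and the real script waits for the bperf socket (waitforlisten) before issuing RPCs.

  #!/usr/bin/env bash
  # Sketch only: the setup performed by run_bperf_err randread 131072 16.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_sock=/var/tmp/bperf.sock     # bdevperf RPC socket (from the log)
  tgt_sock=/var/tmp/spdk.sock        # assumed default socket of the nvmf target

  # 128 KiB random reads, queue depth 16, 2 s run; -z waits for an RPC-driven start
  "$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
      -w randread -o 131072 -t 2 -q 16 -z &

  # keep per-status-code NVMe error counters, retry transient failures forever
  "$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # injection stays off while the controller attaches cleanly
  "$spdk/scripts/rpc.py" -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable

  # attach over TCP with data digest enabled
  "$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt crc32c results (-i 32, taken verbatim from the trace) so digests mismatch
  "$spdk/scripts/rpc.py" -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

  # start the timed workload
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests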
00:36:09.651 [2024-05-15 10:29:55.328091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.328128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.328137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.651 [2024-05-15 10:29:55.344583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.344605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.344613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.651 [2024-05-15 10:29:55.360153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.360172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.360179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.651 [2024-05-15 10:29:55.376008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.376027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.376033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.651 [2024-05-15 10:29:55.392277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.392308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.392320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.651 [2024-05-15 10:29:55.408431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.408449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.408456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.651 [2024-05-15 10:29:55.424705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.424723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.424730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.651 [2024-05-15 10:29:55.441644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.651 [2024-05-15 10:29:55.441663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.651 [2024-05-15 10:29:55.441669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.457583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.457602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.457609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.473509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.473527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.473533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.489081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.489098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.489105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.505284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.505306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.505313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.520900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.520917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.520923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.536378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.536396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.536402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.552801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.552819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.552826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.568643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.568661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.568668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.584490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.584509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.584516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.600313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.600344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.616247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.616265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.616271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.631417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.631435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.631441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.647286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.647308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:09.914 [2024-05-15 10:29:55.647314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.663477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.663495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.663502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.679215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.679233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.679239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:09.914 [2024-05-15 10:29:55.694632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:09.914 [2024-05-15 10:29:55.694650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.914 [2024-05-15 10:29:55.694656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.176 [2024-05-15 10:29:55.710790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.176 [2024-05-15 10:29:55.710809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.176 [2024-05-15 10:29:55.710815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.176 [2024-05-15 10:29:55.726764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.176 [2024-05-15 10:29:55.726782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.176 [2024-05-15 10:29:55.726789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.176 [2024-05-15 10:29:55.742621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.176 [2024-05-15 10:29:55.742639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.176 [2024-05-15 10:29:55.742646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.176 [2024-05-15 10:29:55.758276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.176 [2024-05-15 10:29:55.758300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.176 [2024-05-15 10:29:55.758307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.176 [2024-05-15 10:29:55.773954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.176 [2024-05-15 10:29:55.773972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.176 [2024-05-15 10:29:55.773978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.176 [2024-05-15 10:29:55.790062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.176 [2024-05-15 10:29:55.790080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.790086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.806225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.806249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.806260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.820662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.820680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.820686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.835403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.835421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.835428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.851502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.851527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.851537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.867741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.867766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.867781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.884116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.884134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.884141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.900158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.900177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.900183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.916052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.916071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.916077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.931495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.931514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.931521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.946714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.946737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.946748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.177 [2024-05-15 10:29:55.962388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.177 [2024-05-15 10:29:55.962408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.177 [2024-05-15 10:29:55.962414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.439 [2024-05-15 10:29:55.978362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 
00:36:10.439 [2024-05-15 10:29:55.978386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.439 [2024-05-15 10:29:55.978397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.439 [2024-05-15 10:29:55.993925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.439 [2024-05-15 10:29:55.993949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.439 [2024-05-15 10:29:55.993961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.010217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.010239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.010245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.025483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.025503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.025509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.040726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.040746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.040756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.055691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.055710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.071944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.071963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.071969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.087690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.087713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.087724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.102922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.102940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.102947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.119216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.119235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.119241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.135532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.135551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.135558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.151831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.151855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.151862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.167593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.167613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.167620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.183191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.183211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.183217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.198644] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.198663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.198669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.213409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.213428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.213434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.440 [2024-05-15 10:29:56.229443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.440 [2024-05-15 10:29:56.229462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.440 [2024-05-15 10:29:56.229468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.245097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.245116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.245123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.260608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.260627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.260634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.276766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.276786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.276796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.292301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.292320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.292326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:36:10.703 [2024-05-15 10:29:56.307279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.307304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.307311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.323117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.323142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.323153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.338278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.338302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.338308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.353674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.353693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.353699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.369144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.369166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.369172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.384957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.384976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.384983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.401479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.401498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.401504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.417615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.417634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.417641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.433043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.433067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.433077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.449081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.449099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.449106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.465392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.465411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.465419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.703 [2024-05-15 10:29:56.481182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.703 [2024-05-15 10:29:56.481201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.703 [2024-05-15 10:29:56.481208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.497500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.497519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.497526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.513553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.513572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.513579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.528777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.528796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.528803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.544133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.544158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.544169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.560306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.560330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.560337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.576126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.576145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.576152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.592063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.592081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.592088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.607253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.607271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.607278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.623076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.623101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:10.966 [2024-05-15 10:29:56.623113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.638579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.638598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.638605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.653898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.653922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.653933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.667817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.667842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.667853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.681819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.681843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.681849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.696943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.696962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.696968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.712989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.713008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.713014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.728392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.728411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.728418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.744008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.744027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.744033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:10.966 [2024-05-15 10:29:56.759368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:10.966 [2024-05-15 10:29:56.759387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.966 [2024-05-15 10:29:56.759393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.774804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.774823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.774830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.790132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.790154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.790165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.805432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.805451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.805457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.821149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.821168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.821174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.836431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.836450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.836457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.851777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.851796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.851803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.868204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.868224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.868231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.883784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.883807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.883817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.899812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.899832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.899838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.916030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.916049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.916056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.931762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.931787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.931795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.947508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 
00:36:11.229 [2024-05-15 10:29:56.947528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.947538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.963567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.963592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.963604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.979148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.979167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.979174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:56.994844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:56.994868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:56.994878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.229 [2024-05-15 10:29:57.010812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.229 [2024-05-15 10:29:57.010835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.229 [2024-05-15 10:29:57.010841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.027725] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.027750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.027761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.043879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.043897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.043903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.058754] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.058779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.058791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.074159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.074178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.074185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.089464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.089483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.089489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.105516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.105535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.105541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.120696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.120714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.120721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.135819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.135837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.135844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.151473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.151494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.151500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.165599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.165618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.165625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.181290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.181321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.181332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.196259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.196278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.196285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.212451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.212476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.212491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.228369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.228387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.228394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.244539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.244557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.244564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.260094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.260113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.260119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:11.492 [2024-05-15 10:29:57.275703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.492 [2024-05-15 10:29:57.275727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.492 [2024-05-15 10:29:57.275737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:11.754 [2024-05-15 10:29:57.291271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.754 [2024-05-15 10:29:57.291295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.754 [2024-05-15 10:29:57.291301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:11.754 [2024-05-15 10:29:57.306855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1169280) 00:36:11.754 [2024-05-15 10:29:57.306873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.754 [2024-05-15 10:29:57.306880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:11.754 00:36:11.754 Latency(us) 00:36:11.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.754 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:11.754 nvme0n1 : 2.00 1973.65 246.71 0.00 0.00 8104.34 1802.24 17039.36 00:36:11.754 =================================================================================================================== 00:36:11.755 Total : 1973.65 246.71 0.00 0.00 8104.34 1802.24 17039.36 00:36:11.755 0 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:11.755 | .driver_specific 00:36:11.755 | .nvme_error 00:36:11.755 | .status_code 00:36:11.755 | .command_transient_transport_error' 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 )) 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3077218 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 3077218 ']' 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 3077218 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:11.755 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 
-- # ps --no-headers -o comm= 3077218 00:36:12.023 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:12.023 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:12.023 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3077218' 00:36:12.023 killing process with pid 3077218 00:36:12.023 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 3077218 00:36:12.023 Received shutdown signal, test time was about 2.000000 seconds 00:36:12.023 00:36:12.023 Latency(us) 00:36:12.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.023 =================================================================================================================== 00:36:12.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 3077218 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3077904 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3077904 /var/tmp/bperf.sock 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 3077904 ']' 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:12.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:12.024 10:29:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:12.024 [2024-05-15 10:29:57.708421] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:36:12.024 [2024-05-15 10:29:57.708481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077904 ] 00:36:12.024 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.024 [2024-05-15 10:29:57.784151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.024 [2024-05-15 10:29:57.811694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:13.010 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:13.272 nvme0n1 00:36:13.272 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:13.272 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:13.272 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:13.272 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:13.272 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:13.272 10:29:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:13.272 Running I/O for 2 seconds... 
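For reference, the xtrace above corresponds to the condensed sequence below, reconstructed only from the commands already traced by host/digest.sh for one run_bperf_err pass; it is a sketch, not a copy of the script (paths are shortened to $rootdir, and the target-side rpc.py socket is left at its default since the trace does not show it).

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # 1. Start bdevperf with its own RPC socket, idle (-z) until perform_tests is called.
    $rootdir/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &

    # 2. Keep per-command NVMe error counters and retry indefinitely on the initiator side.
    $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # 3. Disable crc32c error injection while attaching the controller with data digest (--ddgst).
    $rootdir/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Corrupt 256 crc32c operations on the target, then drive the workload.
    $rootdir/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

    # 5. Each corrupted digest must be reported as a transient transport error.
    errs=$($rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errs > 0 ))

The "Data digest error on tqpair" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs that follow are the expected per-I/O evidence that step 5 is counting.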
00:36:13.272 [2024-05-15 10:29:59.013216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190feb58 00:36:13.272 [2024-05-15 10:29:59.013940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.272 [2024-05-15 10:29:59.013965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:36:13.272 [2024-05-15 10:29:59.026036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e3d08 00:36:13.272 [2024-05-15 10:29:59.027038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.272 [2024-05-15 10:29:59.027056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.272 [2024-05-15 10:29:59.039343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.272 [2024-05-15 10:29:59.041075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.272 [2024-05-15 10:29:59.041092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.272 [2024-05-15 10:29:59.051083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.272 [2024-05-15 10:29:59.052899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.272 [2024-05-15 10:29:59.052918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.272 [2024-05-15 10:29:59.062848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.272 [2024-05-15 10:29:59.064694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.272 [2024-05-15 10:29:59.064710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.074609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.076431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.076446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.086334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.535 [2024-05-15 10:29:59.088169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.088184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.098051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.099773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.099789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.109774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.535 [2024-05-15 10:29:59.111591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.111607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.121533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.123354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.123370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.133262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.535 [2024-05-15 10:29:59.135103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.135119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.144959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.146779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.146794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.156669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.535 [2024-05-15 10:29:59.158463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.158479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.168373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.170178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.170193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.180090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.535 [2024-05-15 10:29:59.181903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.181918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.191808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.193664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.193680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.203523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.535 [2024-05-15 10:29:59.205325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.205340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.215238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.217048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.217064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.226949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.535 [2024-05-15 10:29:59.228785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.535 [2024-05-15 10:29:59.228800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.535 [2024-05-15 10:29:59.238674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.535 [2024-05-15 10:29:59.240483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.240499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.536 [2024-05-15 10:29:59.250428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.536 [2024-05-15 10:29:59.252234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.252249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.536 [2024-05-15 10:29:59.262140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.536 [2024-05-15 10:29:59.263962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.263977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.536 [2024-05-15 10:29:59.273876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.536 [2024-05-15 10:29:59.275670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.275686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.536 [2024-05-15 10:29:59.285588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.536 [2024-05-15 10:29:59.287400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.287415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.536 [2024-05-15 10:29:59.297300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.536 [2024-05-15 10:29:59.299121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.299136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.536 [2024-05-15 10:29:59.308991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.536 [2024-05-15 10:29:59.310814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.310829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.536 [2024-05-15 10:29:59.320698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.536 [2024-05-15 10:29:59.322513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.536 [2024-05-15 10:29:59.322529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.332424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.334248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.334263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.344120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.345958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.345973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.355818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.357652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.357669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.367501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.369321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.369336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.379192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.381000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.381015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.390897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.392721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.392736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.402712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.404557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.404572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.414418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.416211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.416226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.426100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.427904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.427919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.437791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.439560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.439575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.449499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.451304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.451319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.461184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.463023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.463038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.472948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.474752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.474767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.484654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.486477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.486492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.496357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.498171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 
10:29:59.498185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.508051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.509876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.509891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.519776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.521615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.521631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.531478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f2d80 00:36:13.799 [2024-05-15 10:29:59.533311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.533326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.543007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:13.799 [2024-05-15 10:29:59.545288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.545308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.554650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:13.799 [2024-05-15 10:29:59.556297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.556312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.566376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:13.799 [2024-05-15 10:29:59.568081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.568096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.578089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:13.799 [2024-05-15 10:29:59.579774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:13.799 [2024-05-15 10:29:59.579789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:13.799 [2024-05-15 10:29:59.589812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:13.799 [2024-05-15 10:29:59.591515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:13.799 [2024-05-15 10:29:59.591530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.601536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.062 [2024-05-15 10:29:59.603241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.603256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.613273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.062 [2024-05-15 10:29:59.614939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.614954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.624959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.062 [2024-05-15 10:29:59.626634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.626649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.636640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.062 [2024-05-15 10:29:59.638325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.638340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.648360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.062 [2024-05-15 10:29:59.650069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.650084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.660083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.062 [2024-05-15 10:29:59.661795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12652 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.661810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.671819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.062 [2024-05-15 10:29:59.673530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.673545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.683541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.062 [2024-05-15 10:29:59.685246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.685261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.695223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.062 [2024-05-15 10:29:59.696913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.696928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.706917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.062 [2024-05-15 10:29:59.708615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.708630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.718625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.062 [2024-05-15 10:29:59.720333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.720348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.730342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.062 [2024-05-15 10:29:59.732048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.732063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.742112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.062 [2024-05-15 10:29:59.743782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14689 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.743798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.062 [2024-05-15 10:29:59.753806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.062 [2024-05-15 10:29:59.755498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.062 [2024-05-15 10:29:59.755513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.765502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.063 [2024-05-15 10:29:59.767206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.767224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.777225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.063 [2024-05-15 10:29:59.778931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.778946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.788959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.063 [2024-05-15 10:29:59.790644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.790659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.800659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.063 [2024-05-15 10:29:59.802352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.802367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.812347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.063 [2024-05-15 10:29:59.814012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.814027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.824064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.063 [2024-05-15 10:29:59.825767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:3698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.825782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.835774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.063 [2024-05-15 10:29:59.837465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.837480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.063 [2024-05-15 10:29:59.847482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.063 [2024-05-15 10:29:59.849166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.063 [2024-05-15 10:29:59.849182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.325 [2024-05-15 10:29:59.859182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.325 [2024-05-15 10:29:59.860872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.325 [2024-05-15 10:29:59.860887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.325 [2024-05-15 10:29:59.870882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.325 [2024-05-15 10:29:59.872552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.325 [2024-05-15 10:29:59.872567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.325 [2024-05-15 10:29:59.882589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.326 [2024-05-15 10:29:59.884277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.884295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.894279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.326 [2024-05-15 10:29:59.895968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.895983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.905999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.326 [2024-05-15 10:29:59.907697] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.907712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.917724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.326 [2024-05-15 10:29:59.919393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.919408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.929420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.326 [2024-05-15 10:29:59.931092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.931107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.941118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.326 [2024-05-15 10:29:59.942808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.942824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.952826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.326 [2024-05-15 10:29:59.954511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.954526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.964544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.326 [2024-05-15 10:29:59.966234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.966249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.976238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.326 [2024-05-15 10:29:59.977927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.977942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.987935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.326 [2024-05-15 10:29:59.989613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:29:59.989628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:29:59.999659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.326 [2024-05-15 10:30:00.001351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.001366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.012036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.326 [2024-05-15 10:30:00.013759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.013776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.023963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.326 [2024-05-15 10:30:00.025637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.025653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.035679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.326 [2024-05-15 10:30:00.037371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.037386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.047407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.326 [2024-05-15 10:30:00.049077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.049098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.059127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.326 [2024-05-15 10:30:00.060788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.060805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.070873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.326 [2024-05-15 
10:30:00.072571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.072590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.082591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.326 [2024-05-15 10:30:00.084254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.084270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.094361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.326 [2024-05-15 10:30:00.096075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.096090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.106071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.326 [2024-05-15 10:30:00.107763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.107779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.326 [2024-05-15 10:30:00.117786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.326 [2024-05-15 10:30:00.119490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.326 [2024-05-15 10:30:00.119506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.589 [2024-05-15 10:30:00.129519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.589 [2024-05-15 10:30:00.131073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.589 [2024-05-15 10:30:00.131089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.589 [2024-05-15 10:30:00.141243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.589 [2024-05-15 10:30:00.142927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.589 [2024-05-15 10:30:00.142943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.589 [2024-05-15 10:30:00.152951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 
00:36:14.589 [2024-05-15 10:30:00.154615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.589 [2024-05-15 10:30:00.154631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.589 [2024-05-15 10:30:00.164658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.589 [2024-05-15 10:30:00.166348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.589 [2024-05-15 10:30:00.166364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.589 [2024-05-15 10:30:00.176346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.589 [2024-05-15 10:30:00.178016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.589 [2024-05-15 10:30:00.178033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.188052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.590 [2024-05-15 10:30:00.189750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.189766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.199757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.590 [2024-05-15 10:30:00.201466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.201482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.211471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.590 [2024-05-15 10:30:00.213181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.213198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.223192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.590 [2024-05-15 10:30:00.224886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.224903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.234899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with 
pdu=0x2000190fda78 00:36:14.590 [2024-05-15 10:30:00.236573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.236590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.246629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.590 [2024-05-15 10:30:00.248332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.248348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.258348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.590 [2024-05-15 10:30:00.260025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.260041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.270045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.590 [2024-05-15 10:30:00.271698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.271714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.281784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.590 [2024-05-15 10:30:00.283487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.283503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.293510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.590 [2024-05-15 10:30:00.295200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.295216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.305216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.590 [2024-05-15 10:30:00.306926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.306942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.316934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.590 [2024-05-15 10:30:00.318627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.318644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.328640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.590 [2024-05-15 10:30:00.330306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.330322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.340335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.590 [2024-05-15 10:30:00.341935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.341951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.352067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.590 [2024-05-15 10:30:00.353710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.353726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.363769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.590 [2024-05-15 10:30:00.365468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.365484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.590 [2024-05-15 10:30:00.375454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.590 [2024-05-15 10:30:00.377132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.590 [2024-05-15 10:30:00.377153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.387199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.853 [2024-05-15 10:30:00.388894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.388910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.398913] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.853 [2024-05-15 10:30:00.400672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.400687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.410746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.853 [2024-05-15 10:30:00.412417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.412433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.422483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.853 [2024-05-15 10:30:00.424169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.424186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.434203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.853 [2024-05-15 10:30:00.435893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.435909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.445908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.853 [2024-05-15 10:30:00.447631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.447647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.457602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.853 [2024-05-15 10:30:00.459310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.459325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.469328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.853 [2024-05-15 10:30:00.471014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.471030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.853 [2024-05-15 10:30:00.481018] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.853 [2024-05-15 10:30:00.482698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.853 [2024-05-15 10:30:00.482714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.492745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.854 [2024-05-15 10:30:00.494449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.494465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.504478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.854 [2024-05-15 10:30:00.506082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.506098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.516194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.854 [2024-05-15 10:30:00.517881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.517897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.527912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.854 [2024-05-15 10:30:00.529558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.529574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.539626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.854 [2024-05-15 10:30:00.541308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.541324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.551352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.854 [2024-05-15 10:30:00.553058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.553074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 
[2024-05-15 10:30:00.563094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.854 [2024-05-15 10:30:00.564774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.564790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.574809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.854 [2024-05-15 10:30:00.576514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.576530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.586517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.854 [2024-05-15 10:30:00.588210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.588226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.598213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:14.854 [2024-05-15 10:30:00.599916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.599932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.609925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:14.854 [2024-05-15 10:30:00.611605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.611621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.621625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:14.854 [2024-05-15 10:30:00.623337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.623353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:14.854 [2024-05-15 10:30:00.633384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:14.854 [2024-05-15 10:30:00.635099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.635115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:36:14.854 [2024-05-15 10:30:00.645124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:14.854 [2024-05-15 10:30:00.646844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:14.854 [2024-05-15 10:30:00.646860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.116 [2024-05-15 10:30:00.656881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:15.116 [2024-05-15 10:30:00.658571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.658586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.668581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:15.117 [2024-05-15 10:30:00.670286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.670305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.680321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:15.117 [2024-05-15 10:30:00.682027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.682046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.692033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:15.117 [2024-05-15 10:30:00.693735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.693751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.703742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:15.117 [2024-05-15 10:30:00.705458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.705474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.715423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:15.117 [2024-05-15 10:30:00.717113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.717129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.727120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:15.117 [2024-05-15 10:30:00.728782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.728799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.738846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:15.117 [2024-05-15 10:30:00.740543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.740559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.750532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:15.117 [2024-05-15 10:30:00.752227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.752243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.762341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:15.117 [2024-05-15 10:30:00.764016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.764032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.774036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:15.117 [2024-05-15 10:30:00.775716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.775731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.785755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:15.117 [2024-05-15 10:30:00.787413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.787429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.797476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:15.117 [2024-05-15 10:30:00.799166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.799182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.809172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:15.117 [2024-05-15 10:30:00.810852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.810868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.820864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:15.117 [2024-05-15 10:30:00.822548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.822564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.832567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:15.117 [2024-05-15 10:30:00.834251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.834266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.844249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:15.117 [2024-05-15 10:30:00.845952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.845968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.855949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:15.117 [2024-05-15 10:30:00.857609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.857625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.867684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:15.117 [2024-05-15 10:30:00.869390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.869406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.879398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:15.117 [2024-05-15 10:30:00.881093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.881109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.891089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:15.117 [2024-05-15 10:30:00.892680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.892696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.117 [2024-05-15 10:30:00.902781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:15.117 [2024-05-15 10:30:00.904447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.117 [2024-05-15 10:30:00.904462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 [2024-05-15 10:30:00.914485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:15.379 [2024-05-15 10:30:00.916173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.379 [2024-05-15 10:30:00.916189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 [2024-05-15 10:30:00.926177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:15.379 [2024-05-15 10:30:00.927870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.379 [2024-05-15 10:30:00.927885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 [2024-05-15 10:30:00.937863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fda78 00:36:15.379 [2024-05-15 10:30:00.939563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.379 [2024-05-15 10:30:00.939579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 [2024-05-15 10:30:00.949555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190e49b0 00:36:15.379 [2024-05-15 10:30:00.951246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.379 [2024-05-15 10:30:00.951261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 [2024-05-15 10:30:00.961258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190f0ff8 00:36:15.379 [2024-05-15 10:30:00.962971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.379 [2024-05-15 10:30:00.962987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 [2024-05-15 10:30:00.972971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190fc560 00:36:15.379 [2024-05-15 10:30:00.974688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.379 [2024-05-15 10:30:00.974704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 [2024-05-15 10:30:00.984677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336800) with pdu=0x2000190ee5c8 00:36:15.379 [2024-05-15 10:30:00.986367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.379 [2024-05-15 10:30:00.986386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.379 00:36:15.379 Latency(us) 00:36:15.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.379 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:15.379 nvme0n1 : 2.01 21544.64 84.16 0.00 0.00 5932.51 4396.37 26105.17 00:36:15.379 =================================================================================================================== 00:36:15.379 Total : 21544.64 84.16 0.00 0.00 5932.51 4396.37 26105.17 00:36:15.379 0 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:15.379 | .driver_specific 00:36:15.379 | .nvme_error 00:36:15.379 | .status_code 00:36:15.379 | .command_transient_transport_error' 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 169 > 0 )) 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3077904 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 3077904 ']' 00:36:15.379 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 3077904 00:36:15.641 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3077904 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3077904' 00:36:15.642 killing process with pid 3077904 00:36:15.642 10:30:01 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 3077904 00:36:15.642 Received shutdown signal, test time was about 2.000000 seconds 00:36:15.642 00:36:15.642 Latency(us) 00:36:15.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.642 =================================================================================================================== 00:36:15.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 3077904 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3078671 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3078671 /var/tmp/bperf.sock 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 3078671 ']' 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:15.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:15.642 10:30:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:15.642 [2024-05-15 10:30:01.383471] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:15.642 [2024-05-15 10:30:01.383525] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078671 ] 00:36:15.642 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:15.642 Zero copy mechanism will not be used. 
00:36:15.642 EAL: No free 2048 kB hugepages reported on node 1 00:36:15.904 [2024-05-15 10:30:01.458720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.904 [2024-05-15 10:30:01.486963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.487 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:16.487 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:36:16.487 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:16.487 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:16.748 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:16.748 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:16.748 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:16.748 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:16.748 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:16.748 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:17.009 nvme0n1 00:36:17.009 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:17.009 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:17.009 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:17.009 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:17.009 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:17.009 10:30:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:17.009 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:17.009 Zero copy mechanism will not be used. 00:36:17.009 Running I/O for 2 seconds... 
00:36:17.271 [2024-05-15 10:30:02.842397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.842959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.842991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:02.865260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.865715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.865736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:02.885414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.885752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.885770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:02.905682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.906186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.906205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:02.925633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.926249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.926267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:02.946445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.946976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.946994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:02.965545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.966163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.966180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:02.986355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:02.986720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:02.986738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:03.007933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:03.008230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.271 [2024-05-15 10:30:03.008248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.271 [2024-05-15 10:30:03.029013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.271 [2024-05-15 10:30:03.029462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.272 [2024-05-15 10:30:03.029480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.272 [2024-05-15 10:30:03.049886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.272 [2024-05-15 10:30:03.050411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.272 [2024-05-15 10:30:03.050429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.070837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.071288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.071309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.089951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.090378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.090396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.110771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.111160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.111177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.132104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.132490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.132508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.152575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.153165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.153183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.173249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.173768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.173785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.193716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.194025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.194042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.215419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.215706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.215724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.236073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.236449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.236466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.257517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.258035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.258052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.276587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.277026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.277043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.296156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.296696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.296712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.532 [2024-05-15 10:30:03.318803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.532 [2024-05-15 10:30:03.319150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.532 [2024-05-15 10:30:03.319167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.341019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.341387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.341404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.362214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.362588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.362606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.384045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.384416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.384436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.404527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.405099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 
[2024-05-15 10:30:03.405116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.424233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.424681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.424699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.446128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.446568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.446585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.465959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.466341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.466357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.487885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.488255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.488271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.509050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.509365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.509382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.531779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.532161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.532178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.552990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.553431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.553448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:17.794 [2024-05-15 10:30:03.575903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:17.794 [2024-05-15 10:30:03.576358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:17.794 [2024-05-15 10:30:03.576375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.599324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.599768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.599786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.622109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.622612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.622630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.645530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.646035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.646052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.666809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.667416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.667433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.688395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.688623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.688638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.709381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.709665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.709682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.731618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.732138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.732154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.751884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.752340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.752361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.773865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.774561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.774577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.793756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.794303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.794320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.816105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.816411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.816428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.057 [2024-05-15 10:30:03.836589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.057 [2024-05-15 10:30:03.837048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.057 [2024-05-15 10:30:03.837064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:03.857949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:03.858479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:03.858496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:03.878645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:03.879338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:03.879356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:03.901570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:03.902025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:03.902043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:03.922652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:03.923160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:03.923177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:03.943847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:03.944215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:03.944233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:03.962933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:03.963251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:03.963269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:03.984185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:03.984592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:03.984610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:04.004452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 
[2024-05-15 10:30:04.004891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:04.004909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:04.026400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:04.026777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:04.026793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:04.048550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:04.048945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:04.048963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:04.069543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:04.069959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:04.069976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.320 [2024-05-15 10:30:04.091198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.320 [2024-05-15 10:30:04.091822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.320 [2024-05-15 10:30:04.091839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.582 [2024-05-15 10:30:04.114889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.582 [2024-05-15 10:30:04.115309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.582 [2024-05-15 10:30:04.115327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.582 [2024-05-15 10:30:04.134310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.582 [2024-05-15 10:30:04.134594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.582 [2024-05-15 10:30:04.134611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.582 [2024-05-15 10:30:04.155655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.582 [2024-05-15 10:30:04.156032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.582 [2024-05-15 10:30:04.156050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.582 [2024-05-15 10:30:04.178692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.179199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.179216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.201012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.201530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.201547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.223078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.223599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.223615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.245815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.246250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.246267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.265538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.265842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.265858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.286299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.286752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.286769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.309064] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.309564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.309585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.331261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.331727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.331745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.354626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.583 [2024-05-15 10:30:04.355166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.583 [2024-05-15 10:30:04.355182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.583 [2024-05-15 10:30:04.376590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.377274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.377295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.397412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.397704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.397723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.418062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.418348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.418365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.440450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.440996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.441012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:36:18.845 [2024-05-15 10:30:04.463381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.463869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.463886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.484587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.484970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.484988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.506320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.506691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.506707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.527377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.527746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.527763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.549134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.549715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.549732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.571555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.571997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.572014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.591523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.591821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.591838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.613752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.614204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.614221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:18.845 [2024-05-15 10:30:04.634601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:18.845 [2024-05-15 10:30:04.635132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:18.845 [2024-05-15 10:30:04.635147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.107 [2024-05-15 10:30:04.656024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:19.107 [2024-05-15 10:30:04.656404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.107 [2024-05-15 10:30:04.656420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.107 [2024-05-15 10:30:04.676302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:19.107 [2024-05-15 10:30:04.676752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.107 [2024-05-15 10:30:04.676769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.107 [2024-05-15 10:30:04.696350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:19.107 [2024-05-15 10:30:04.696712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.107 [2024-05-15 10:30:04.696730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.107 [2024-05-15 10:30:04.717512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:19.107 [2024-05-15 10:30:04.717960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.107 [2024-05-15 10:30:04.717978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:19.107 [2024-05-15 10:30:04.737627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:19.107 [2024-05-15 10:30:04.737961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.107 [2024-05-15 10:30:04.737978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:19.107 [2024-05-15 10:30:04.759655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:19.107 [2024-05-15 10:30:04.760022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.107 [2024-05-15 10:30:04.760039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:19.107 [2024-05-15 10:30:04.780265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2336b40) with pdu=0x2000190fef90 00:36:19.107 [2024-05-15 10:30:04.780778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:19.107 [2024-05-15 10:30:04.780795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:19.107 00:36:19.107 Latency(us) 00:36:19.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.107 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:19.107 nvme0n1 : 2.01 1434.76 179.35 0.00 0.00 11124.30 8465.07 41724.59 00:36:19.107 =================================================================================================================== 00:36:19.107 Total : 1434.76 179.35 0.00 0.00 11124.30 8465.07 41724.59 00:36:19.108 0 00:36:19.108 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:19.108 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:19.108 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:19.108 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:19.108 | .driver_specific 00:36:19.108 | .nvme_error 00:36:19.108 | .status_code 00:36:19.108 | .command_transient_transport_error' 00:36:19.369 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 92 > 0 )) 00:36:19.369 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3078671 00:36:19.369 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 3078671 ']' 00:36:19.369 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 3078671 00:36:19.369 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:36:19.369 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:19.369 10:30:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3078671 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3078671' 00:36:19.369 killing process with pid 3078671 00:36:19.369 10:30:05 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 3078671 00:36:19.369 Received shutdown signal, test time was about 2.000000 seconds 00:36:19.369 00:36:19.369 Latency(us) 00:36:19.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.369 =================================================================================================================== 00:36:19.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 3078671 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3076287 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 3076287 ']' 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 3076287 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:19.369 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3076287 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3076287' 00:36:19.632 killing process with pid 3076287 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 3076287 00:36:19.632 [2024-05-15 10:30:05.207595] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 3076287 00:36:19.632 00:36:19.632 real 0m15.997s 00:36:19.632 user 0m31.639s 00:36:19.632 sys 0m3.007s 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:19.632 ************************************ 00:36:19.632 END TEST nvmf_digest_error 00:36:19.632 ************************************ 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:19.632 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:19.632 rmmod nvme_tcp 00:36:19.632 rmmod nvme_fabrics 00:36:19.632 rmmod nvme_keyring 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@124 -- # set -e 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3076287 ']' 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3076287 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 3076287 ']' 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 3076287 00:36:19.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (3076287) - No such process 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 3076287 is not found' 00:36:19.894 Process with pid 3076287 is not found 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:19.894 10:30:05 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.814 10:30:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:21.814 00:36:21.814 real 0m41.372s 00:36:21.814 user 1m5.024s 00:36:21.814 sys 0m11.391s 00:36:21.814 10:30:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:21.814 10:30:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:21.814 ************************************ 00:36:21.814 END TEST nvmf_digest 00:36:21.814 ************************************ 00:36:21.814 10:30:07 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:36:21.814 10:30:07 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:36:21.814 10:30:07 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:36:21.814 10:30:07 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:21.814 10:30:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:21.814 10:30:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:21.814 10:30:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:22.077 ************************************ 00:36:22.077 START TEST nvmf_bdevperf 00:36:22.077 ************************************ 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:22.077 * Looking for test storage... 
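For reference, the pass/fail decision of the digest error test torn down above comes down to a single RPC round trip: host/digest.sh asks the still-running bdevperf instance for its I/O statistics over /var/tmp/bperf.sock and pulls the transient transport error counter out of the reply with the jq filter shown in the trace. A minimal stand-alone sketch of that query (the socket path, bdev name and SPDK checkout path are the ones visible in the trace; treat them as assumptions anywhere else):

    #!/usr/bin/env bash
    # Sketch: read back how many WRITEs were completed with COMMAND TRANSIENT
    # TRANSPORT ERROR after their data digest was corrupted on purpose.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test only passes when at least one corrupted write was actually rejected.
    (( errcount > 0 )) && echo "saw $errcount transient transport errors on nvme0n1"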
00:36:22.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:22.077 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:22.078 10:30:07 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:36:22.078 10:30:07 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:30.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:30.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:30.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.240 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:30.241 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:30.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:30.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:36:30.241 00:36:30.241 --- 10.0.0.2 ping statistics --- 00:36:30.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.241 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:30.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:30.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.443 ms 00:36:30.241 00:36:30.241 --- 10.0.0.1 ping statistics --- 00:36:30.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:30.241 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3084044 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3084044 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 3084044 ']' 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:30.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:30.241 10:30:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.241 [2024-05-15 10:30:14.998278] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:30.241 [2024-05-15 10:30:14.998333] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:30.241 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.241 [2024-05-15 10:30:15.080422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:30.241 [2024-05-15 10:30:15.112677] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:30.241 [2024-05-15 10:30:15.112714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:30.241 [2024-05-15 10:30:15.112722] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:30.241 [2024-05-15 10:30:15.112729] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:30.241 [2024-05-15 10:30:15.112735] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:30.241 [2024-05-15 10:30:15.112838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:30.241 [2024-05-15 10:30:15.112993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.241 [2024-05-15 10:30:15.112994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.241 [2024-05-15 10:30:15.818157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.241 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.242 Malloc0 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 
00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:30.242 [2024-05-15 10:30:15.884414] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:30.242 [2024-05-15 10:30:15.884627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:30.242 { 00:36:30.242 "params": { 00:36:30.242 "name": "Nvme$subsystem", 00:36:30.242 "trtype": "$TEST_TRANSPORT", 00:36:30.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.242 "adrfam": "ipv4", 00:36:30.242 "trsvcid": "$NVMF_PORT", 00:36:30.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:30.242 "hdgst": ${hdgst:-false}, 00:36:30.242 "ddgst": ${ddgst:-false} 00:36:30.242 }, 00:36:30.242 "method": "bdev_nvme_attach_controller" 00:36:30.242 } 00:36:30.242 EOF 00:36:30.242 )") 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:30.242 10:30:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:30.242 "params": { 00:36:30.242 "name": "Nvme1", 00:36:30.242 "trtype": "tcp", 00:36:30.242 "traddr": "10.0.0.2", 00:36:30.242 "adrfam": "ipv4", 00:36:30.242 "trsvcid": "4420", 00:36:30.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:30.242 "hdgst": false, 00:36:30.242 "ddgst": false 00:36:30.242 }, 00:36:30.242 "method": "bdev_nvme_attach_controller" 00:36:30.242 }' 00:36:30.242 [2024-05-15 10:30:15.938969] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:30.242 [2024-05-15 10:30:15.939023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084191 ] 00:36:30.242 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.242 [2024-05-15 10:30:15.988230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.242 [2024-05-15 10:30:16.016851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.504 Running I/O for 1 seconds... 
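The bdevperf launch above (host/bdevperf.sh@27) never writes a config file: gen_nvmf_target_json prints one bdev_nvme_attach_controller entry per subsystem and the shell hands the resulting JSON to bdevperf over /dev/fd/62 via process substitution. A minimal standalone sketch of the same pattern is below; the attach parameters are copied from the resolved JSON in the trace, while the surrounding "subsystems"/"config" wrapper is the usual SPDK JSON-config shape and not a verbatim copy of the harness helper. The 1-second pass's results follow below.

# Sketch: hand bdevperf an inline NVMe-oF attach config through process
# substitution, the way the harness does (parameters copied from the log).
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
config_json='{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}'
# bdevperf reads the config from the file descriptor created by <( ... )
"$BDEVPERF" --json <(printf '%s\n' "$config_json") -q 128 -o 4096 -w verify -t 1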
00:36:31.894 00:36:31.894 Latency(us) 00:36:31.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:31.894 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:31.894 Verification LBA range: start 0x0 length 0x4000 00:36:31.894 Nvme1n1 : 1.01 9385.64 36.66 0.00 0.00 13577.09 2143.57 30146.56 00:36:31.894 =================================================================================================================== 00:36:31.894 Total : 9385.64 36.66 0.00 0.00 13577.09 2143.57 30146.56 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3084524 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:31.894 { 00:36:31.894 "params": { 00:36:31.894 "name": "Nvme$subsystem", 00:36:31.894 "trtype": "$TEST_TRANSPORT", 00:36:31.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:31.894 "adrfam": "ipv4", 00:36:31.894 "trsvcid": "$NVMF_PORT", 00:36:31.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:31.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:31.894 "hdgst": ${hdgst:-false}, 00:36:31.894 "ddgst": ${ddgst:-false} 00:36:31.894 }, 00:36:31.894 "method": "bdev_nvme_attach_controller" 00:36:31.894 } 00:36:31.894 EOF 00:36:31.894 )") 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:36:31.894 10:30:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:31.894 "params": { 00:36:31.894 "name": "Nvme1", 00:36:31.894 "trtype": "tcp", 00:36:31.894 "traddr": "10.0.0.2", 00:36:31.894 "adrfam": "ipv4", 00:36:31.894 "trsvcid": "4420", 00:36:31.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:31.894 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:31.894 "hdgst": false, 00:36:31.894 "ddgst": false 00:36:31.894 }, 00:36:31.894 "method": "bdev_nvme_attach_controller" 00:36:31.894 }' 00:36:31.894 [2024-05-15 10:30:17.452530] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:31.894 [2024-05-15 10:30:17.452590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3084524 ] 00:36:31.894 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.895 [2024-05-15 10:30:17.511048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:31.895 [2024-05-15 10:30:17.539860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:32.168 Running I/O for 15 seconds... 
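The first pass (-t 1) finishes cleanly at roughly 9.4K IOPS. The 15-second pass that starts here adds -f, which the failover tests use so bdevperf keeps running across the induced failure: bdevperf.sh@30 records the bdevperf PID, @32 sleeps, and the kill -9 of target PID 3084044 at bdevperf.sh@33 immediately below is what produces the flood of aborted completions that follows. A rough sketch of that sequence, with illustrative variable names (the real harness drives this through its own helpers):

# Sketch (names illustrative) of the failover sequence the log is executing.
"$BDEVPERF" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!          # bdevperf.sh@30: remember the background bdevperf PID
sleep 3                 # bdevperf.sh@32: let the verify workload reach steady state
kill -9 "$nvmfpid"      # bdevperf.sh@33: SIGKILL nvmf_tgt; every in-flight I/O gets aborted
sleep 3                 # bdevperf.sh@35: bdev_nvme keeps retrying the controller meanwhile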
00:36:34.769 10:30:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3084044 00:36:34.769 10:30:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:34.769 [2024-05-15 10:30:20.420330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.769 [2024-05-15 10:30:20.420629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.769 [2024-05-15 10:30:20.420636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.420985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.420994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.421003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.421019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.421035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.421052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.421069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.421087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.770 [2024-05-15 10:30:20.421104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.770 [2024-05-15 10:30:20.421275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.770 [2024-05-15 10:30:20.421283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:35 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93760 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 
[2024-05-15 10:30:20.421876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.771 [2024-05-15 10:30:20.421925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.771 [2024-05-15 10:30:20.421942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.771 [2024-05-15 10:30:20.421953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.421960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.421969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.421976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.421986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.421994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:34.772 [2024-05-15 10:30:20.422229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:34.772 [2024-05-15 10:30:20.422562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.772 [2024-05-15 10:30:20.422579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.772 [2024-05-15 10:30:20.422587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.773 [2024-05-15 10:30:20.422597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.773 [2024-05-15 10:30:20.422605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.773 [2024-05-15 10:30:20.422614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.773 [2024-05-15 10:30:20.422621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.773 [2024-05-15 10:30:20.422631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.773 [2024-05-15 10:30:20.422638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.773 [2024-05-15 10:30:20.422648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:34.773 [2024-05-15 10:30:20.422655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.773 [2024-05-15 10:30:20.422664] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1209f00 is same with the state(5) to be set 00:36:34.773 [2024-05-15 10:30:20.422673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:34.773 [2024-05-15 10:30:20.422679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:34.773 [2024-05-15 10:30:20.422685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93344 len:8 PRP1 0x0 PRP2 0x0 00:36:34.773 [2024-05-15 10:30:20.422694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:34.773 [2024-05-15 10:30:20.422732] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1209f00 was disconnected and freed. reset controller. 
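Everything from the kill -9 down to this point is one event: the TCP connection to the killed target drops, every verify command still queued on qpair 0x1209f00 is completed manually with ABORTED - SQ DELETION, the qpair is disconnected and freed, and bdev_nvme schedules a controller reset. The block is long but uniform; when reading a saved copy of this console output, a couple of greps condense it (the log file name below is a placeholder, not something the harness creates):

# Condense the abort flood from a saved copy of this console output.
grep -c 'ABORTED - SQ DELETION' bdevperf_failover.log            # number of aborted completions
grep -o 'lba:[0-9]\+' bdevperf_failover.log | sort -u | wc -l    # distinct LBAs that were in flight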
00:36:34.773 [2024-05-15 10:30:20.426324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.773 [2024-05-15 10:30:20.426372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.773 [2024-05-15 10:30:20.427557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.428102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.428116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.773 [2024-05-15 10:30:20.428126] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.773 [2024-05-15 10:30:20.428377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.773 [2024-05-15 10:30:20.428604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.773 [2024-05-15 10:30:20.428612] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.773 [2024-05-15 10:30:20.428621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.773 [2024-05-15 10:30:20.432218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:34.773 [2024-05-15 10:30:20.440479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.773 [2024-05-15 10:30:20.441283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.441850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.441893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.773 [2024-05-15 10:30:20.441904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.773 [2024-05-15 10:30:20.442147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.773 [2024-05-15 10:30:20.442382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.773 [2024-05-15 10:30:20.442391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.773 [2024-05-15 10:30:20.442399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.773 [2024-05-15 10:30:20.445995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
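errno 111 on Linux is ECONNREFUSED: the target's TCP listener on 10.0.0.2:4420 (the address and default NVMe/TCP port shown in the log) is not accepting connections while the subsystem is down, so every reconnect attempt is rejected at the socket layer. A throwaway sketch, not SPDK's posix_sock_create(), that reproduces the same error when the peer is reachable but nothing is listening on the port:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Address and port taken from the log lines above. */
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* With no listener on 4420 this fails with errno 111 (ECONNREFUSED). */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }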
00:36:34.773 [2024-05-15 10:30:20.454474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.773 [2024-05-15 10:30:20.455248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.455929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.455967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.773 [2024-05-15 10:30:20.455978] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.773 [2024-05-15 10:30:20.456220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.773 [2024-05-15 10:30:20.456452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.773 [2024-05-15 10:30:20.456462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.773 [2024-05-15 10:30:20.456469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.773 [2024-05-15 10:30:20.460084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:34.773 [2024-05-15 10:30:20.468366] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.773 [2024-05-15 10:30:20.469132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.469741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.469779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.773 [2024-05-15 10:30:20.469792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.773 [2024-05-15 10:30:20.470035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.773 [2024-05-15 10:30:20.470261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.773 [2024-05-15 10:30:20.470270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.773 [2024-05-15 10:30:20.470277] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.773 [2024-05-15 10:30:20.473884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:34.773 [2024-05-15 10:30:20.482369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.773 [2024-05-15 10:30:20.483181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.483677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.483715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.773 [2024-05-15 10:30:20.483733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.773 [2024-05-15 10:30:20.483975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.773 [2024-05-15 10:30:20.484202] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.773 [2024-05-15 10:30:20.484211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.773 [2024-05-15 10:30:20.484219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.773 [2024-05-15 10:30:20.487826] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:34.773 [2024-05-15 10:30:20.496315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.773 [2024-05-15 10:30:20.496917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.497560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.497597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.773 [2024-05-15 10:30:20.497608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.773 [2024-05-15 10:30:20.497849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.773 [2024-05-15 10:30:20.498075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.773 [2024-05-15 10:30:20.498085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.773 [2024-05-15 10:30:20.498093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.773 [2024-05-15 10:30:20.501704] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:34.773 [2024-05-15 10:30:20.510190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.773 [2024-05-15 10:30:20.510974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.511662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.773 [2024-05-15 10:30:20.511700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.774 [2024-05-15 10:30:20.511713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.774 [2024-05-15 10:30:20.511956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.774 [2024-05-15 10:30:20.512183] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.774 [2024-05-15 10:30:20.512191] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.774 [2024-05-15 10:30:20.512199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.774 [2024-05-15 10:30:20.515807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:34.774 [2024-05-15 10:30:20.524089] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.774 [2024-05-15 10:30:20.524882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.774 [2024-05-15 10:30:20.525526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.774 [2024-05-15 10:30:20.525563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.774 [2024-05-15 10:30:20.525574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.774 [2024-05-15 10:30:20.525820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.774 [2024-05-15 10:30:20.526046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.774 [2024-05-15 10:30:20.526055] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.774 [2024-05-15 10:30:20.526062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.774 [2024-05-15 10:30:20.529669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:34.774 [2024-05-15 10:30:20.537942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.774 [2024-05-15 10:30:20.538726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.774 [2024-05-15 10:30:20.539270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.774 [2024-05-15 10:30:20.539281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.774 [2024-05-15 10:30:20.539289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.774 [2024-05-15 10:30:20.539518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.774 [2024-05-15 10:30:20.539740] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.774 [2024-05-15 10:30:20.539749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.774 [2024-05-15 10:30:20.539756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.774 [2024-05-15 10:30:20.543354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:34.774 [2024-05-15 10:30:20.551836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:34.774 [2024-05-15 10:30:20.552728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.774 [2024-05-15 10:30:20.553487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:34.774 [2024-05-15 10:30:20.553525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:34.774 [2024-05-15 10:30:20.553536] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:34.774 [2024-05-15 10:30:20.553778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:34.774 [2024-05-15 10:30:20.554004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:34.774 [2024-05-15 10:30:20.554014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:34.774 [2024-05-15 10:30:20.554022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:34.774 [2024-05-15 10:30:20.557644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.038 [2024-05-15 10:30:20.565709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.038 [2024-05-15 10:30:20.566607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.038 [2024-05-15 10:30:20.567189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.038 [2024-05-15 10:30:20.567204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.038 [2024-05-15 10:30:20.567214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.038 [2024-05-15 10:30:20.567462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.038 [2024-05-15 10:30:20.567694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.038 [2024-05-15 10:30:20.567703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.038 [2024-05-15 10:30:20.567711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.038 [2024-05-15 10:30:20.571317] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.038 [2024-05-15 10:30:20.579593] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.038 [2024-05-15 10:30:20.580481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.038 [2024-05-15 10:30:20.581057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.038 [2024-05-15 10:30:20.581072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.038 [2024-05-15 10:30:20.581082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.038 [2024-05-15 10:30:20.581330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.038 [2024-05-15 10:30:20.581557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.038 [2024-05-15 10:30:20.581566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.038 [2024-05-15 10:30:20.581574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.038 [2024-05-15 10:30:20.585178] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
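The "(9): Bad file descriptor" in the flush failures is plain errno 9 (EBADF): by the time the qpair is flushed, its socket has already been closed by the disconnect, so the I/O lands on a stale descriptor. A small illustration of the same errno, assuming nothing about SPDK's socket layer:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Close a pipe's write end, then write to it again: the write fails
         * with EBADF (errno 9), the value reported for the flush above. */
        int fds[2];
        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        close(fds[1]);
        if (write(fds[1], "x", 1) < 0)
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fds[0]);
        return 0;
    }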
00:36:35.038 [2024-05-15 10:30:20.593456] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.038 [2024-05-15 10:30:20.594312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.038 [2024-05-15 10:30:20.594886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.038 [2024-05-15 10:30:20.594900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.594910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.595151] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.595385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.595395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.595403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.598996] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.039 [2024-05-15 10:30:20.607469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.608275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.608824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.608862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.608873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.609114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.609348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.609362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.609369] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.612961] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.039 [2024-05-15 10:30:20.621438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.622285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.622874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.622887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.622897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.623138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.623371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.623381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.623389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.626991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.039 [2024-05-15 10:30:20.635469] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.636322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.636893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.636907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.636916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.637157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.637390] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.637401] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.637408] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.641004] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.039 [2024-05-15 10:30:20.649486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.650289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.650847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.650858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.650866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.651088] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.651315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.651324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.651336] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.654932] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.039 [2024-05-15 10:30:20.663422] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.664218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.664851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.664889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.664900] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.665142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.665379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.665389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.665397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.668993] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.039 [2024-05-15 10:30:20.677260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.678087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.678693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.678708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.678718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.678959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.679185] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.679195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.679203] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.682808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.039 [2024-05-15 10:30:20.691300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.692036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.692674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.692690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.692699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.692941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.693167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.693176] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.693184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.696781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.039 [2024-05-15 10:30:20.705256] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.706061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.706689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.706726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.706737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.706978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.707204] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.707213] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.707221] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.710821] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.039 [2024-05-15 10:30:20.719289] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.039 [2024-05-15 10:30:20.720023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.720627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.039 [2024-05-15 10:30:20.720642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.039 [2024-05-15 10:30:20.720651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.039 [2024-05-15 10:30:20.720893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.039 [2024-05-15 10:30:20.721118] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.039 [2024-05-15 10:30:20.721127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.039 [2024-05-15 10:30:20.721135] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.039 [2024-05-15 10:30:20.724735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.040 [2024-05-15 10:30:20.733245] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.040 [2024-05-15 10:30:20.734009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.734640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.734677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.040 [2024-05-15 10:30:20.734688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.040 [2024-05-15 10:30:20.734930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.040 [2024-05-15 10:30:20.735156] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.040 [2024-05-15 10:30:20.735165] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.040 [2024-05-15 10:30:20.735173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.040 [2024-05-15 10:30:20.738779] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.040 [2024-05-15 10:30:20.747248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.040 [2024-05-15 10:30:20.747950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.748621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.748659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.040 [2024-05-15 10:30:20.748669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.040 [2024-05-15 10:30:20.748911] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.040 [2024-05-15 10:30:20.749137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.040 [2024-05-15 10:30:20.749146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.040 [2024-05-15 10:30:20.749154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.040 [2024-05-15 10:30:20.752757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.040 [2024-05-15 10:30:20.761239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.040 [2024-05-15 10:30:20.762144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.762750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.762764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.040 [2024-05-15 10:30:20.762774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.040 [2024-05-15 10:30:20.763015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.040 [2024-05-15 10:30:20.763241] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.040 [2024-05-15 10:30:20.763250] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.040 [2024-05-15 10:30:20.763257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.040 [2024-05-15 10:30:20.766859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.040 [2024-05-15 10:30:20.775125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.040 [2024-05-15 10:30:20.776006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.776583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.776597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.040 [2024-05-15 10:30:20.776607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.040 [2024-05-15 10:30:20.776848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.040 [2024-05-15 10:30:20.777075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.040 [2024-05-15 10:30:20.777083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.040 [2024-05-15 10:30:20.777091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.040 [2024-05-15 10:30:20.780693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.040 [2024-05-15 10:30:20.789174] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.040 [2024-05-15 10:30:20.790059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.790648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.790663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.040 [2024-05-15 10:30:20.790672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.040 [2024-05-15 10:30:20.790914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.040 [2024-05-15 10:30:20.791139] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.040 [2024-05-15 10:30:20.791148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.040 [2024-05-15 10:30:20.791155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.040 [2024-05-15 10:30:20.794793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.040 [2024-05-15 10:30:20.803054] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.040 [2024-05-15 10:30:20.803822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.804438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.804453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.040 [2024-05-15 10:30:20.804462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.040 [2024-05-15 10:30:20.804704] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.040 [2024-05-15 10:30:20.804930] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.040 [2024-05-15 10:30:20.804939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.040 [2024-05-15 10:30:20.804946] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.040 [2024-05-15 10:30:20.808547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.040 [2024-05-15 10:30:20.817022] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.040 [2024-05-15 10:30:20.817924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.818515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.040 [2024-05-15 10:30:20.818530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.040 [2024-05-15 10:30:20.818539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.040 [2024-05-15 10:30:20.818780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.040 [2024-05-15 10:30:20.819006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.040 [2024-05-15 10:30:20.819014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.040 [2024-05-15 10:30:20.819022] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.040 [2024-05-15 10:30:20.822623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.040 [2024-05-15 10:30:20.830893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.304 [2024-05-15 10:30:20.831753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.304 [2024-05-15 10:30:20.832478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.304 [2024-05-15 10:30:20.832516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.304 [2024-05-15 10:30:20.832532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.304 [2024-05-15 10:30:20.832774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.304 [2024-05-15 10:30:20.833000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.304 [2024-05-15 10:30:20.833009] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.304 [2024-05-15 10:30:20.833016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.304 [2024-05-15 10:30:20.836627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.304 [2024-05-15 10:30:20.844895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.304 [2024-05-15 10:30:20.845757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.304 [2024-05-15 10:30:20.846484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.304 [2024-05-15 10:30:20.846522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.304 [2024-05-15 10:30:20.846533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.304 [2024-05-15 10:30:20.846775] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.304 [2024-05-15 10:30:20.847001] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.304 [2024-05-15 10:30:20.847010] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.304 [2024-05-15 10:30:20.847018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.304 [2024-05-15 10:30:20.850623] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.304 [2024-05-15 10:30:20.858894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.304 [2024-05-15 10:30:20.859676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.304 [2024-05-15 10:30:20.860298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.304 [2024-05-15 10:30:20.860313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.304 [2024-05-15 10:30:20.860322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.304 [2024-05-15 10:30:20.860564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.304 [2024-05-15 10:30:20.860790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.304 [2024-05-15 10:30:20.860799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.304 [2024-05-15 10:30:20.860806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.304 [2024-05-15 10:30:20.864407] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.304 [2024-05-15 10:30:20.872882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.305 [2024-05-15 10:30:20.873742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.874304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.874319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.305 [2024-05-15 10:30:20.874329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.305 [2024-05-15 10:30:20.874574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.305 [2024-05-15 10:30:20.874800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.305 [2024-05-15 10:30:20.874809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.305 [2024-05-15 10:30:20.874816] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.305 [2024-05-15 10:30:20.878418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.305 [2024-05-15 10:30:20.886889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.305 [2024-05-15 10:30:20.887623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.888228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.888242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.305 [2024-05-15 10:30:20.888251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.305 [2024-05-15 10:30:20.888501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.305 [2024-05-15 10:30:20.888729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.305 [2024-05-15 10:30:20.888737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.305 [2024-05-15 10:30:20.888744] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.305 [2024-05-15 10:30:20.892341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.305 [2024-05-15 10:30:20.900820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.305 [2024-05-15 10:30:20.901668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.902249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.902263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.305 [2024-05-15 10:30:20.902273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.305 [2024-05-15 10:30:20.902521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.305 [2024-05-15 10:30:20.902748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.305 [2024-05-15 10:30:20.902757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.305 [2024-05-15 10:30:20.902765] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.305 [2024-05-15 10:30:20.906364] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.305 [2024-05-15 10:30:20.914839] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.305 [2024-05-15 10:30:20.915730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.916305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.916319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.305 [2024-05-15 10:30:20.916329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.305 [2024-05-15 10:30:20.916570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.305 [2024-05-15 10:30:20.916801] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.305 [2024-05-15 10:30:20.916810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.305 [2024-05-15 10:30:20.916817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.305 [2024-05-15 10:30:20.920421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.305 [2024-05-15 10:30:20.928712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.305 [2024-05-15 10:30:20.929594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.930215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.930229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.305 [2024-05-15 10:30:20.930239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.305 [2024-05-15 10:30:20.930489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.305 [2024-05-15 10:30:20.930716] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.305 [2024-05-15 10:30:20.930725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.305 [2024-05-15 10:30:20.930733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.305 [2024-05-15 10:30:20.934334] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.305 [2024-05-15 10:30:20.942606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.305 [2024-05-15 10:30:20.943491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.944072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.305 [2024-05-15 10:30:20.944086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.305 [2024-05-15 10:30:20.944095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.305 [2024-05-15 10:30:20.944343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.305 [2024-05-15 10:30:20.944569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.305 [2024-05-15 10:30:20.944579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.305 [2024-05-15 10:30:20.944587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.305 [2024-05-15 10:30:20.948185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same resetting-controller / connect() errno 111 / reset-failed cycle repeats, identical apart from timestamps, roughly every 14 ms from 10:30:20.956455 through 10:30:21.588488 ...]
00:36:35.838 [2024-05-15 10:30:21.596955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.838 [2024-05-15 10:30:21.597800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.838 [2024-05-15 10:30:21.598407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.838 [2024-05-15 10:30:21.598421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.838 [2024-05-15 10:30:21.598431] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.838 [2024-05-15 10:30:21.598672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.838 [2024-05-15 10:30:21.598897] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.838 [2024-05-15 10:30:21.598906] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.838 [2024-05-15 10:30:21.598913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.838 [2024-05-15 10:30:21.602512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:35.838 [2024-05-15 10:30:21.610982] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.838 [2024-05-15 10:30:21.611841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.838 [2024-05-15 10:30:21.612437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.838 [2024-05-15 10:30:21.612453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.838 [2024-05-15 10:30:21.612462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.838 [2024-05-15 10:30:21.612708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.838 [2024-05-15 10:30:21.612934] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.838 [2024-05-15 10:30:21.612943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.838 [2024-05-15 10:30:21.612950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:35.838 [2024-05-15 10:30:21.616552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:35.838 [2024-05-15 10:30:21.625020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:35.838 [2024-05-15 10:30:21.625799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.838 [2024-05-15 10:30:21.626464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:35.838 [2024-05-15 10:30:21.626501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:35.838 [2024-05-15 10:30:21.626512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:35.838 [2024-05-15 10:30:21.626755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:35.838 [2024-05-15 10:30:21.626981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:35.838 [2024-05-15 10:30:21.626990] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:35.838 [2024-05-15 10:30:21.626998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.630600] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.102 [2024-05-15 10:30:21.638859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.639728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.640475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.640512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.640525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.640768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.640995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.641004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.641012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.644616] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.102 [2024-05-15 10:30:21.652872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.653797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.654498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.654535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.654546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.654788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.655019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.655029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.655036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.658645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.102 [2024-05-15 10:30:21.666909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.667830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.668432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.668448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.668458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.668700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.668926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.668935] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.668943] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.672542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.102 [2024-05-15 10:30:21.680808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.681699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.682263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.682277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.682286] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.682534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.682760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.682769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.682777] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.686375] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.102 [2024-05-15 10:30:21.694847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.695650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.696217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.696230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.696240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.696488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.696715] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.696729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.696737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.700333] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.102 [2024-05-15 10:30:21.708805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.709593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.710193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.710206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.710216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.710463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.710689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.710698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.710706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.714304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.102 [2024-05-15 10:30:21.722793] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.723678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.724287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.724306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.724316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.724557] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.724784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.724793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.724800] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.728399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.102 [2024-05-15 10:30:21.736665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.737562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.738051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.738064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.738074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.738322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.102 [2024-05-15 10:30:21.738548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.102 [2024-05-15 10:30:21.738557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.102 [2024-05-15 10:30:21.738569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.102 [2024-05-15 10:30:21.742164] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.102 [2024-05-15 10:30:21.750644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.102 [2024-05-15 10:30:21.751404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.751985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.102 [2024-05-15 10:30:21.751996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.102 [2024-05-15 10:30:21.752004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.102 [2024-05-15 10:30:21.752226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.752452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.752462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.752469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.756138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.103 [2024-05-15 10:30:21.764632] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.765575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.766132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.766144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.766154] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.766403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.766629] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.766637] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.766644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.770237] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.103 [2024-05-15 10:30:21.778505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.779396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.779964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.779977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.779986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.780227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.780460] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.780469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.780476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.784080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.103 [2024-05-15 10:30:21.792342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.793241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.793897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.793934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.793944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.794186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.794420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.794429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.794437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.798029] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.103 [2024-05-15 10:30:21.806287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.807179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.807755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.807770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.807780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.808021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.808246] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.808254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.808262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.811866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.103 [2024-05-15 10:30:21.820126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.820988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.821619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.821634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.821643] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.821885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.822111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.822119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.822127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.825726] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.103 [2024-05-15 10:30:21.834003] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.834866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.835427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.835442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.835451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.835693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.835918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.835926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.835933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.839534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.103 [2024-05-15 10:30:21.848007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.848861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.849503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.849540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.849551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.849792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.850017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.850026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.850033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.853641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.103 [2024-05-15 10:30:21.861915] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.862809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.863370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.863384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.863393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.863635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.863859] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.863868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.863876] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.867476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.103 [2024-05-15 10:30:21.875953] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.876815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.877108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.877130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.103 [2024-05-15 10:30:21.877140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.103 [2024-05-15 10:30:21.877393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.103 [2024-05-15 10:30:21.877621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.103 [2024-05-15 10:30:21.877629] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.103 [2024-05-15 10:30:21.877636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.103 [2024-05-15 10:30:21.881232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.103 [2024-05-15 10:30:21.889924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.103 [2024-05-15 10:30:21.890816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.103 [2024-05-15 10:30:21.891497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.104 [2024-05-15 10:30:21.891535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.104 [2024-05-15 10:30:21.891546] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.104 [2024-05-15 10:30:21.891788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.104 [2024-05-15 10:30:21.892013] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.104 [2024-05-15 10:30:21.892021] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.104 [2024-05-15 10:30:21.892028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.367 [2024-05-15 10:30:21.895632] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.367 [2024-05-15 10:30:21.903890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.367 [2024-05-15 10:30:21.904776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.905496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.905533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.367 [2024-05-15 10:30:21.905544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.367 [2024-05-15 10:30:21.905786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.367 [2024-05-15 10:30:21.906012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.367 [2024-05-15 10:30:21.906020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.367 [2024-05-15 10:30:21.906027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.367 [2024-05-15 10:30:21.909627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.367 [2024-05-15 10:30:21.917885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.367 [2024-05-15 10:30:21.918770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.919404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.919423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.367 [2024-05-15 10:30:21.919433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.367 [2024-05-15 10:30:21.919675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.367 [2024-05-15 10:30:21.919900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.367 [2024-05-15 10:30:21.919908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.367 [2024-05-15 10:30:21.919915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.367 [2024-05-15 10:30:21.923514] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.367 [2024-05-15 10:30:21.931771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.367 [2024-05-15 10:30:21.932662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.933245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.933258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.367 [2024-05-15 10:30:21.933267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.367 [2024-05-15 10:30:21.933513] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.367 [2024-05-15 10:30:21.933739] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.367 [2024-05-15 10:30:21.933747] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.367 [2024-05-15 10:30:21.933754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.367 [2024-05-15 10:30:21.937355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.367 [2024-05-15 10:30:21.945614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.367 [2024-05-15 10:30:21.946487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.947059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.947072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.367 [2024-05-15 10:30:21.947081] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.367 [2024-05-15 10:30:21.947330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.367 [2024-05-15 10:30:21.947556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.367 [2024-05-15 10:30:21.947564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.367 [2024-05-15 10:30:21.947571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.367 [2024-05-15 10:30:21.951166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.367 [2024-05-15 10:30:21.959648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.367 [2024-05-15 10:30:21.960600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.961159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.367 [2024-05-15 10:30:21.961171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.367 [2024-05-15 10:30:21.961185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.367 [2024-05-15 10:30:21.961434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.367 [2024-05-15 10:30:21.961660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.367 [2024-05-15 10:30:21.961667] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:21.961675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:21.965268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.368 [2024-05-15 10:30:21.973534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:21.974396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:21.975001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:21.975014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:21.975023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:21.975265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:21.975498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:21.975507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:21.975514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:21.979115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.368 [2024-05-15 10:30:21.987381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:21.988238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:21.988944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:21.988981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:21.988992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:21.989234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:21.989467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:21.989476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:21.989483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:21.993079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.368 [2024-05-15 10:30:22.001341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.002227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.003135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.003172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:22.003183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:22.003484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:22.003711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:22.003719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:22.003727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:22.007326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.368 [2024-05-15 10:30:22.015381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.016078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.016727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.016764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:22.016776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:22.017021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:22.017247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:22.017254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:22.017262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:22.020864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.368 [2024-05-15 10:30:22.029335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.030224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.030849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.030863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:22.030872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:22.031113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:22.031343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:22.031351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:22.031359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:22.034953] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.368 [2024-05-15 10:30:22.043217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.044018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.044656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.044693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:22.044705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:22.044948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:22.045178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:22.045186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:22.045193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:22.048794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.368 [2024-05-15 10:30:22.057055] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.057840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.058542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.058578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:22.058590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:22.058831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:22.059056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:22.059064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:22.059071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:22.062685] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.368 [2024-05-15 10:30:22.070942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.071802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.072388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.072402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:22.072412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:22.072653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:22.072877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:22.072885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:22.072893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:22.076491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.368 [2024-05-15 10:30:22.084967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.085844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.086428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.086443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.368 [2024-05-15 10:30:22.086452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.368 [2024-05-15 10:30:22.086693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.368 [2024-05-15 10:30:22.086919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.368 [2024-05-15 10:30:22.086931] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.368 [2024-05-15 10:30:22.086938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.368 [2024-05-15 10:30:22.090541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.368 [2024-05-15 10:30:22.099018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.368 [2024-05-15 10:30:22.099688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.100271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.368 [2024-05-15 10:30:22.100284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.369 [2024-05-15 10:30:22.100300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.369 [2024-05-15 10:30:22.100542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.369 [2024-05-15 10:30:22.100768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.369 [2024-05-15 10:30:22.100775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.369 [2024-05-15 10:30:22.100783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.369 [2024-05-15 10:30:22.104381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.369 [2024-05-15 10:30:22.113065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.369 [2024-05-15 10:30:22.113978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.114172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.114184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.369 [2024-05-15 10:30:22.114193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.369 [2024-05-15 10:30:22.114441] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.369 [2024-05-15 10:30:22.114667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.369 [2024-05-15 10:30:22.114675] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.369 [2024-05-15 10:30:22.114682] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.369 [2024-05-15 10:30:22.118279] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.369 [2024-05-15 10:30:22.126968] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.369 [2024-05-15 10:30:22.127511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.128089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.128102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.369 [2024-05-15 10:30:22.128111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.369 [2024-05-15 10:30:22.128360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.369 [2024-05-15 10:30:22.128586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.369 [2024-05-15 10:30:22.128594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.369 [2024-05-15 10:30:22.128606] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.369 [2024-05-15 10:30:22.132198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.369 [2024-05-15 10:30:22.140886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.369 [2024-05-15 10:30:22.141779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.142409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.142423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.369 [2024-05-15 10:30:22.142433] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.369 [2024-05-15 10:30:22.142674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.369 [2024-05-15 10:30:22.142899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.369 [2024-05-15 10:30:22.142908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.369 [2024-05-15 10:30:22.142915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.369 [2024-05-15 10:30:22.146515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.369 [2024-05-15 10:30:22.154777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.369 [2024-05-15 10:30:22.155689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.156273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.369 [2024-05-15 10:30:22.156285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.369 [2024-05-15 10:30:22.156301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.369 [2024-05-15 10:30:22.156543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.369 [2024-05-15 10:30:22.156768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.369 [2024-05-15 10:30:22.156775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.369 [2024-05-15 10:30:22.156783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.369 [2024-05-15 10:30:22.160392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.633 [2024-05-15 10:30:22.168664] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.633 [2024-05-15 10:30:22.169563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.170135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.170147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.633 [2024-05-15 10:30:22.170156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.633 [2024-05-15 10:30:22.170403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.633 [2024-05-15 10:30:22.170628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.633 [2024-05-15 10:30:22.170636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.633 [2024-05-15 10:30:22.170644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.633 [2024-05-15 10:30:22.174242] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.633 [2024-05-15 10:30:22.182517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.633 [2024-05-15 10:30:22.183394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.183992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.184004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.633 [2024-05-15 10:30:22.184014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.633 [2024-05-15 10:30:22.184254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.633 [2024-05-15 10:30:22.184487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.633 [2024-05-15 10:30:22.184496] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.633 [2024-05-15 10:30:22.184504] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.633 [2024-05-15 10:30:22.188106] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.633 [2024-05-15 10:30:22.196373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.633 [2024-05-15 10:30:22.197256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.197829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.197865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.633 [2024-05-15 10:30:22.197876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.633 [2024-05-15 10:30:22.198117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.633 [2024-05-15 10:30:22.198350] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.633 [2024-05-15 10:30:22.198359] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.633 [2024-05-15 10:30:22.198366] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.633 [2024-05-15 10:30:22.201959] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.633 [2024-05-15 10:30:22.210217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.633 [2024-05-15 10:30:22.211115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.211694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.211709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.633 [2024-05-15 10:30:22.211718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.633 [2024-05-15 10:30:22.211959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.633 [2024-05-15 10:30:22.212184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.633 [2024-05-15 10:30:22.212192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.633 [2024-05-15 10:30:22.212200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.633 [2024-05-15 10:30:22.215799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.633 [2024-05-15 10:30:22.224065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.633 [2024-05-15 10:30:22.224946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.225540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.225554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.633 [2024-05-15 10:30:22.225563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.633 [2024-05-15 10:30:22.225805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.633 [2024-05-15 10:30:22.226030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.633 [2024-05-15 10:30:22.226038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.633 [2024-05-15 10:30:22.226046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.633 [2024-05-15 10:30:22.229648] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.633 [2024-05-15 10:30:22.237911] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.633 [2024-05-15 10:30:22.238774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.239510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.633 [2024-05-15 10:30:22.239547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.633 [2024-05-15 10:30:22.239558] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.633 [2024-05-15 10:30:22.239799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.633 [2024-05-15 10:30:22.240025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.633 [2024-05-15 10:30:22.240033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.633 [2024-05-15 10:30:22.240040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.633 [2024-05-15 10:30:22.243641] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.634 [2024-05-15 10:30:22.251902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.252689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.253419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.253435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.253446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.253692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.253919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.253927] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.253935] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.257542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.634 [2024-05-15 10:30:22.265820] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.266666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.267249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.267262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.267271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.267518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.267744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.267752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.267759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.271360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.634 [2024-05-15 10:30:22.279836] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.280690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.281252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.281264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.281273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.281519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.281745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.281753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.281761] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.285359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.634 [2024-05-15 10:30:22.293835] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.294728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.295455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.295492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.295504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.295749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.295975] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.295984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.295991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.299596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.634 [2024-05-15 10:30:22.307854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.308717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.309302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.309316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.309329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.309570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.309796] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.309805] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.309812] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.313413] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.634 [2024-05-15 10:30:22.321888] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.322758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.323356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.323371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.323380] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.323621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.323846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.323854] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.323861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.327459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.634 [2024-05-15 10:30:22.335933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.336854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.337520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.337556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.337567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.337809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.338034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.338042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.338050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.341651] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.634 [2024-05-15 10:30:22.349908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.350806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.351510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.351546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.351557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.351803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.352029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.352037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.352045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.355646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.634 [2024-05-15 10:30:22.363922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.364798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.365501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.365537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.365548] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.365790] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.366016] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.634 [2024-05-15 10:30:22.366024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.634 [2024-05-15 10:30:22.366032] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.634 [2024-05-15 10:30:22.369635] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.634 [2024-05-15 10:30:22.377897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.634 [2024-05-15 10:30:22.378762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.379350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.634 [2024-05-15 10:30:22.379364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.634 [2024-05-15 10:30:22.379374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.634 [2024-05-15 10:30:22.379615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.634 [2024-05-15 10:30:22.379840] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.635 [2024-05-15 10:30:22.379848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.635 [2024-05-15 10:30:22.379856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.635 [2024-05-15 10:30:22.383457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.635 [2024-05-15 10:30:22.391932] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.635 [2024-05-15 10:30:22.392803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.635 [2024-05-15 10:30:22.393411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.635 [2024-05-15 10:30:22.393426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.635 [2024-05-15 10:30:22.393436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.635 [2024-05-15 10:30:22.393677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.635 [2024-05-15 10:30:22.393907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.635 [2024-05-15 10:30:22.393915] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.635 [2024-05-15 10:30:22.393922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.635 [2024-05-15 10:30:22.397524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.635 [2024-05-15 10:30:22.405786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.635 [2024-05-15 10:30:22.406673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.635 [2024-05-15 10:30:22.407256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.635 [2024-05-15 10:30:22.407269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.635 [2024-05-15 10:30:22.407278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.635 [2024-05-15 10:30:22.407525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.635 [2024-05-15 10:30:22.407751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.635 [2024-05-15 10:30:22.407759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.635 [2024-05-15 10:30:22.407766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.635 [2024-05-15 10:30:22.411365] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.635 [2024-05-15 10:30:22.419626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.635 [2024-05-15 10:30:22.420480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.635 [2024-05-15 10:30:22.421064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.635 [2024-05-15 10:30:22.421076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.635 [2024-05-15 10:30:22.421086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.635 [2024-05-15 10:30:22.421332] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.635 [2024-05-15 10:30:22.421558] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.635 [2024-05-15 10:30:22.421566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.635 [2024-05-15 10:30:22.421573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.635 [2024-05-15 10:30:22.425171] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.900 [2024-05-15 10:30:22.433646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.434571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.435160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.435173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.900 [2024-05-15 10:30:22.435182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.900 [2024-05-15 10:30:22.435431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.900 [2024-05-15 10:30:22.435657] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.900 [2024-05-15 10:30:22.435673] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.900 [2024-05-15 10:30:22.435680] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.900 [2024-05-15 10:30:22.439275] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.900 [2024-05-15 10:30:22.447540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.448300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.448952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.448989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.900 [2024-05-15 10:30:22.448999] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.900 [2024-05-15 10:30:22.449241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.900 [2024-05-15 10:30:22.449474] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.900 [2024-05-15 10:30:22.449483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.900 [2024-05-15 10:30:22.449490] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.900 [2024-05-15 10:30:22.453087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.900 [2024-05-15 10:30:22.461568] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.462528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.463111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.463123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.900 [2024-05-15 10:30:22.463132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.900 [2024-05-15 10:30:22.463380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.900 [2024-05-15 10:30:22.463606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.900 [2024-05-15 10:30:22.463614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.900 [2024-05-15 10:30:22.463621] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.900 [2024-05-15 10:30:22.467217] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.900 [2024-05-15 10:30:22.475479] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.476204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.476919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.476955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.900 [2024-05-15 10:30:22.476966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.900 [2024-05-15 10:30:22.477207] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.900 [2024-05-15 10:30:22.477440] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.900 [2024-05-15 10:30:22.477449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.900 [2024-05-15 10:30:22.477460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.900 [2024-05-15 10:30:22.481054] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.900 [2024-05-15 10:30:22.489317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.490219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.490901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.490938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.900 [2024-05-15 10:30:22.490948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.900 [2024-05-15 10:30:22.491189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.900 [2024-05-15 10:30:22.491421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.900 [2024-05-15 10:30:22.491430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.900 [2024-05-15 10:30:22.491437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.900 [2024-05-15 10:30:22.495029] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.900 [2024-05-15 10:30:22.503281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.504188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.504850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.504886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.900 [2024-05-15 10:30:22.504897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.900 [2024-05-15 10:30:22.505138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.900 [2024-05-15 10:30:22.505370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.900 [2024-05-15 10:30:22.505379] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.900 [2024-05-15 10:30:22.505386] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.900 [2024-05-15 10:30:22.508979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.900 [2024-05-15 10:30:22.517232] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.518145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.518755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.900 [2024-05-15 10:30:22.518768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.900 [2024-05-15 10:30:22.518777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.900 [2024-05-15 10:30:22.519018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.900 [2024-05-15 10:30:22.519243] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.900 [2024-05-15 10:30:22.519251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.900 [2024-05-15 10:30:22.519259] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.900 [2024-05-15 10:30:22.522862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.900 [2024-05-15 10:30:22.531116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.900 [2024-05-15 10:30:22.532021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.532703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.532740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.532751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.532992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.533217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.533225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.533233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.536833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.901 [2024-05-15 10:30:22.545085] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.545850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.546502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.546539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.546551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.546793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.547019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.547026] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.547034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.550627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.901 [2024-05-15 10:30:22.559095] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.559770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.560514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.560550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.560561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.560802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.561027] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.561035] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.561042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.564639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.901 [2024-05-15 10:30:22.573115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.573924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.574494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.574505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.574513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.574735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.574956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.574964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.574970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.578563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.901 [2024-05-15 10:30:22.587031] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.587885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.588569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.588606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.588617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.588858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.589083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.589091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.589098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.592698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.901 [2024-05-15 10:30:22.600950] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.601815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.602397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.602410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.602420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.602661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.602887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.602895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.602902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.606578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.901 [2024-05-15 10:30:22.614845] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.615745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.616329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.616342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.616352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.616593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.616819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.616826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.616834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.620432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.901 [2024-05-15 10:30:22.628694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.629555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.630134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.630146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.630155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.630405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.630630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.630638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.630645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.634239] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.901 [2024-05-15 10:30:22.642710] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.643603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.644194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.644206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.644215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.644465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.644691] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.644699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.644706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.901 [2024-05-15 10:30:22.648300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.901 [2024-05-15 10:30:22.656558] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.901 [2024-05-15 10:30:22.657391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.658005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.901 [2024-05-15 10:30:22.658022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.901 [2024-05-15 10:30:22.658031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.901 [2024-05-15 10:30:22.658272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.901 [2024-05-15 10:30:22.658506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.901 [2024-05-15 10:30:22.658515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.901 [2024-05-15 10:30:22.658522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.902 [2024-05-15 10:30:22.662127] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:36.902 [2024-05-15 10:30:22.670596] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.902 [2024-05-15 10:30:22.671508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.902 [2024-05-15 10:30:22.672089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.902 [2024-05-15 10:30:22.672102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.902 [2024-05-15 10:30:22.672111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.902 [2024-05-15 10:30:22.672359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.902 [2024-05-15 10:30:22.672584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.902 [2024-05-15 10:30:22.672592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.902 [2024-05-15 10:30:22.672599] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.902 [2024-05-15 10:30:22.676194] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:36.902 [2024-05-15 10:30:22.684448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:36.902 [2024-05-15 10:30:22.685343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.902 [2024-05-15 10:30:22.685942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:36.902 [2024-05-15 10:30:22.685955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:36.902 [2024-05-15 10:30:22.685964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:36.902 [2024-05-15 10:30:22.686205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:36.902 [2024-05-15 10:30:22.686437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:36.902 [2024-05-15 10:30:22.686446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:36.902 [2024-05-15 10:30:22.686453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:36.902 [2024-05-15 10:30:22.690052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.166 [2024-05-15 10:30:22.698312] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.166 [2024-05-15 10:30:22.699109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.699284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.699299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.166 [2024-05-15 10:30:22.699312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.166 [2024-05-15 10:30:22.699535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.166 [2024-05-15 10:30:22.699757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.166 [2024-05-15 10:30:22.699764] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.166 [2024-05-15 10:30:22.699771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.166 [2024-05-15 10:30:22.703360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.166 [2024-05-15 10:30:22.712254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.166 [2024-05-15 10:30:22.713131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.713697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.713734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.166 [2024-05-15 10:30:22.713745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.166 [2024-05-15 10:30:22.713987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.166 [2024-05-15 10:30:22.714213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.166 [2024-05-15 10:30:22.714221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.166 [2024-05-15 10:30:22.714228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.166 [2024-05-15 10:30:22.717827] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.166 [2024-05-15 10:30:22.726301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.166 [2024-05-15 10:30:22.727005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.727697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.727733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.166 [2024-05-15 10:30:22.727744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.166 [2024-05-15 10:30:22.727985] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.166 [2024-05-15 10:30:22.728210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.166 [2024-05-15 10:30:22.728218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.166 [2024-05-15 10:30:22.728225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.166 [2024-05-15 10:30:22.731825] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.166 [2024-05-15 10:30:22.740293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.166 [2024-05-15 10:30:22.741153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.741718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.166 [2024-05-15 10:30:22.741732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.166 [2024-05-15 10:30:22.741742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.166 [2024-05-15 10:30:22.741987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.166 [2024-05-15 10:30:22.742213] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.166 [2024-05-15 10:30:22.742221] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.742228] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.745828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.167 [2024-05-15 10:30:22.754300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.755195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.755653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.755667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.755676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.755917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.756142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.756150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.756158] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.759765] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.167 [2024-05-15 10:30:22.768233] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.769155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.769789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.769803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.769813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.770053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.770278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.770286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.770298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.773893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.167 [2024-05-15 10:30:22.782261] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.783109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.783691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.783705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.783714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.783955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.784184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.784192] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.784200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.787800] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.167 [2024-05-15 10:30:22.796267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.797151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.797728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.797741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.797751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.797992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.798217] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.798225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.798232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.801830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.167 [2024-05-15 10:30:22.810306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.811194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.811813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.811827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.811836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.812077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.812307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.812315] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.812322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.815918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.167 [2024-05-15 10:30:22.824175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.825051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.825636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.825649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.825658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.825900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.826125] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.826137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.826144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.829741] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.167 [2024-05-15 10:30:22.838210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.839063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.839646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.839659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.839669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.839910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.840135] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.840143] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.840150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.843749] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.167 [2024-05-15 10:30:22.852216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.853112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.853690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.853704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.853714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.853955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.854180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.854188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.854195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.857793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.167 [2024-05-15 10:30:22.866062] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.866939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.867512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.867548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.867559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.167 [2024-05-15 10:30:22.867800] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.167 [2024-05-15 10:30:22.868026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.167 [2024-05-15 10:30:22.868034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.167 [2024-05-15 10:30:22.868045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.167 [2024-05-15 10:30:22.871646] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.167 [2024-05-15 10:30:22.880110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.167 [2024-05-15 10:30:22.881002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.881681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.167 [2024-05-15 10:30:22.881717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.167 [2024-05-15 10:30:22.881728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.168 [2024-05-15 10:30:22.881969] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.168 [2024-05-15 10:30:22.882195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.168 [2024-05-15 10:30:22.882203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.168 [2024-05-15 10:30:22.882210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.168 [2024-05-15 10:30:22.885807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.168 [2024-05-15 10:30:22.894070] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.168 [2024-05-15 10:30:22.894864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.895559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.895595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.168 [2024-05-15 10:30:22.895606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.168 [2024-05-15 10:30:22.895847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.168 [2024-05-15 10:30:22.896072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.168 [2024-05-15 10:30:22.896080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.168 [2024-05-15 10:30:22.896087] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.168 [2024-05-15 10:30:22.899686] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.168 [2024-05-15 10:30:22.907955] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.168 [2024-05-15 10:30:22.908687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.909268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.909280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.168 [2024-05-15 10:30:22.909289] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.168 [2024-05-15 10:30:22.909539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.168 [2024-05-15 10:30:22.909764] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.168 [2024-05-15 10:30:22.909772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.168 [2024-05-15 10:30:22.909779] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.168 [2024-05-15 10:30:22.913381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.168 [2024-05-15 10:30:22.921852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.168 [2024-05-15 10:30:22.922740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.923320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.923334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.168 [2024-05-15 10:30:22.923343] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.168 [2024-05-15 10:30:22.923585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.168 [2024-05-15 10:30:22.923810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.168 [2024-05-15 10:30:22.923818] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.168 [2024-05-15 10:30:22.923825] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.168 [2024-05-15 10:30:22.927423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.168 [2024-05-15 10:30:22.935891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.168 [2024-05-15 10:30:22.936788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.937371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.937386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.168 [2024-05-15 10:30:22.937395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.168 [2024-05-15 10:30:22.937636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.168 [2024-05-15 10:30:22.937861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.168 [2024-05-15 10:30:22.937869] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.168 [2024-05-15 10:30:22.937877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.168 [2024-05-15 10:30:22.941484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.168 [2024-05-15 10:30:22.949755] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.168 [2024-05-15 10:30:22.950644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.951233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.168 [2024-05-15 10:30:22.951246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.168 [2024-05-15 10:30:22.951255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.168 [2024-05-15 10:30:22.951502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.168 [2024-05-15 10:30:22.951728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.168 [2024-05-15 10:30:22.951736] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.168 [2024-05-15 10:30:22.951743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.168 [2024-05-15 10:30:22.955337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.432 [2024-05-15 10:30:22.963815] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:22.964709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:22.965301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:22.965314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:22.965324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:22.965565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:22.965790] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.432 [2024-05-15 10:30:22.965798] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.432 [2024-05-15 10:30:22.965805] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.432 [2024-05-15 10:30:22.969405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.432 [2024-05-15 10:30:22.977671] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:22.978562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:22.979138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:22.979151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:22.979160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:22.979407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:22.979633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.432 [2024-05-15 10:30:22.979641] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.432 [2024-05-15 10:30:22.979648] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.432 [2024-05-15 10:30:22.983244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.432 [2024-05-15 10:30:22.991709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:22.992476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:22.993060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:22.993073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:22.993082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:22.993331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:22.993556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.432 [2024-05-15 10:30:22.993564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.432 [2024-05-15 10:30:22.993571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.432 [2024-05-15 10:30:22.997163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.432 [2024-05-15 10:30:23.005837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:23.006738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.007515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.007552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:23.007563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:23.007804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:23.008029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.432 [2024-05-15 10:30:23.008037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.432 [2024-05-15 10:30:23.008044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.432 [2024-05-15 10:30:23.011645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.432 [2024-05-15 10:30:23.019694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:23.020599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.021173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.021185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:23.021194] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:23.021443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:23.021669] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.432 [2024-05-15 10:30:23.021677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.432 [2024-05-15 10:30:23.021684] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.432 [2024-05-15 10:30:23.025277] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.432 [2024-05-15 10:30:23.033534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:23.034451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.035030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.035043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:23.035052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:23.035301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:23.035527] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.432 [2024-05-15 10:30:23.035535] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.432 [2024-05-15 10:30:23.035542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.432 [2024-05-15 10:30:23.039138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.432 [2024-05-15 10:30:23.047396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:23.048271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.048838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.048874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:23.048888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:23.049130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:23.049364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.432 [2024-05-15 10:30:23.049373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.432 [2024-05-15 10:30:23.049380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.432 [2024-05-15 10:30:23.052973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.432 [2024-05-15 10:30:23.061448] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.432 [2024-05-15 10:30:23.062313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.062907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.432 [2024-05-15 10:30:23.062919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.432 [2024-05-15 10:30:23.062928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.432 [2024-05-15 10:30:23.063168] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.432 [2024-05-15 10:30:23.063401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.063410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.063417] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.067010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.433 [2024-05-15 10:30:23.075486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.076262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.076911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.076947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.076958] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.433 [2024-05-15 10:30:23.077199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.433 [2024-05-15 10:30:23.077432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.077441] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.077448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.081041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.433 [2024-05-15 10:30:23.089507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.090418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.091006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.091018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.091027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.433 [2024-05-15 10:30:23.091275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.433 [2024-05-15 10:30:23.091509] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.091518] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.091526] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.095118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.433 [2024-05-15 10:30:23.103373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.104039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.104771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.104807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.104818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.433 [2024-05-15 10:30:23.105059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.433 [2024-05-15 10:30:23.105284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.105299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.105308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.108896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.433 [2024-05-15 10:30:23.117362] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.118299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.118950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.118962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.118971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.433 [2024-05-15 10:30:23.119212] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.433 [2024-05-15 10:30:23.119444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.119453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.119460] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.123058] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.433 [2024-05-15 10:30:23.131316] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.132114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.132760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.132796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.132807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.433 [2024-05-15 10:30:23.133048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.433 [2024-05-15 10:30:23.133278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.133286] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.133301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.136893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.433 [2024-05-15 10:30:23.145151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.146021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.146696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.146732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.146743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.433 [2024-05-15 10:30:23.146984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.433 [2024-05-15 10:30:23.147209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.147217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.147225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.150822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.433 [2024-05-15 10:30:23.159078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.159988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.160653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.160689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.160699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.433 [2024-05-15 10:30:23.160941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.433 [2024-05-15 10:30:23.161165] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.433 [2024-05-15 10:30:23.161173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.433 [2024-05-15 10:30:23.161181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.433 [2024-05-15 10:30:23.164793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.433 [2024-05-15 10:30:23.173047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.433 [2024-05-15 10:30:23.173952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.174629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.433 [2024-05-15 10:30:23.174665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.433 [2024-05-15 10:30:23.174676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.434 [2024-05-15 10:30:23.174917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.434 [2024-05-15 10:30:23.175142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.434 [2024-05-15 10:30:23.175154] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.434 [2024-05-15 10:30:23.175162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.434 [2024-05-15 10:30:23.178764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.434 [2024-05-15 10:30:23.187034] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.434 [2024-05-15 10:30:23.187939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.434 [2024-05-15 10:30:23.188522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.434 [2024-05-15 10:30:23.188537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.434 [2024-05-15 10:30:23.188547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.434 [2024-05-15 10:30:23.188788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.434 [2024-05-15 10:30:23.189014] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.434 [2024-05-15 10:30:23.189022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.434 [2024-05-15 10:30:23.189029] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.434 [2024-05-15 10:30:23.192634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.434 [2024-05-15 10:30:23.200906] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.434 [2024-05-15 10:30:23.201794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.434 [2024-05-15 10:30:23.202450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.434 [2024-05-15 10:30:23.202486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.434 [2024-05-15 10:30:23.202497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.434 [2024-05-15 10:30:23.202738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.434 [2024-05-15 10:30:23.202963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.434 [2024-05-15 10:30:23.202971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.434 [2024-05-15 10:30:23.202979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.434 [2024-05-15 10:30:23.206584] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.434 [2024-05-15 10:30:23.214848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.434 [2024-05-15 10:30:23.215747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.434 [2024-05-15 10:30:23.216335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.434 [2024-05-15 10:30:23.216349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.434 [2024-05-15 10:30:23.216358] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.434 [2024-05-15 10:30:23.216599] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.434 [2024-05-15 10:30:23.216824] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.434 [2024-05-15 10:30:23.216832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.434 [2024-05-15 10:30:23.216844] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.434 [2024-05-15 10:30:23.220445] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.697 [2024-05-15 10:30:23.228717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.697 [2024-05-15 10:30:23.229514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.230076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.230085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.697 [2024-05-15 10:30:23.230093] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.697 [2024-05-15 10:30:23.230321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.697 [2024-05-15 10:30:23.230544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.697 [2024-05-15 10:30:23.230551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.697 [2024-05-15 10:30:23.230558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.697 [2024-05-15 10:30:23.234150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.697 [2024-05-15 10:30:23.242624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.697 [2024-05-15 10:30:23.243569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.244092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.244104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.697 [2024-05-15 10:30:23.244114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.697 [2024-05-15 10:30:23.244363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.697 [2024-05-15 10:30:23.244589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.697 [2024-05-15 10:30:23.244597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.697 [2024-05-15 10:30:23.244604] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.697 [2024-05-15 10:30:23.248199] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.697 [2024-05-15 10:30:23.256668] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.697 [2024-05-15 10:30:23.257562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.257969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.257982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.697 [2024-05-15 10:30:23.257991] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.697 [2024-05-15 10:30:23.258232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.697 [2024-05-15 10:30:23.258464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.697 [2024-05-15 10:30:23.258473] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.697 [2024-05-15 10:30:23.258481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.697 [2024-05-15 10:30:23.262078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.697 [2024-05-15 10:30:23.270557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.697 [2024-05-15 10:30:23.271448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.272033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.272045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.697 [2024-05-15 10:30:23.272055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.697 [2024-05-15 10:30:23.272306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.697 [2024-05-15 10:30:23.272533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.697 [2024-05-15 10:30:23.272541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.697 [2024-05-15 10:30:23.272548] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.697 [2024-05-15 10:30:23.276146] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.697 [2024-05-15 10:30:23.284407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.697 [2024-05-15 10:30:23.285270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.285886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.697 [2024-05-15 10:30:23.285899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.697 [2024-05-15 10:30:23.285908] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.286149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.286381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.286391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.286398] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.289993] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.698 [2024-05-15 10:30:23.298247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.299143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.299731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.299745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.299754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.299996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.300220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.300229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.300236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.303834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.698 [2024-05-15 10:30:23.312096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.312970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.313562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.313576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.313585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.313826] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.314051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.314059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.314066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.317661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.698 [2024-05-15 10:30:23.326125] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.327021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.327707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.327743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.327754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.327995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.328220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.328228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.328236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.331842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.698 [2024-05-15 10:30:23.340097] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.340870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.341554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.341591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.341601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.341843] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.342068] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.342076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.342083] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.345679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.698 [2024-05-15 10:30:23.354137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.355017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.355695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.355732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.355742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.355984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.356209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.356217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.356224] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.359820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.698 [2024-05-15 10:30:23.368087] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.368885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.369553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.369589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.369600] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.369841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.370066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.370074] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.370081] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.373676] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.698 [2024-05-15 10:30:23.381928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.382734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.383298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.383309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.383316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.383539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.383760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.383768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.383774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.387366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.698 [2024-05-15 10:30:23.395831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.396678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.397269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.397286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.397302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.397544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.397768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.397777] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.397784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.401379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.698 [2024-05-15 10:30:23.409863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.698 [2024-05-15 10:30:23.410691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.411275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.698 [2024-05-15 10:30:23.411287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.698 [2024-05-15 10:30:23.411305] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.698 [2024-05-15 10:30:23.411547] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.698 [2024-05-15 10:30:23.411772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.698 [2024-05-15 10:30:23.411781] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.698 [2024-05-15 10:30:23.411789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.698 [2024-05-15 10:30:23.415383] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3084044 Killed "${NVMF_APP[@]}" "$@" 00:36:37.698 10:30:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:37.698 10:30:23 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:37.699 [2024-05-15 10:30:23.423853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.699 [2024-05-15 10:30:23.424761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.425524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.425561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.699 [2024-05-15 10:30:23.425572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.699 [2024-05-15 10:30:23.425813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3085548 00:36:37.699 [2024-05-15 10:30:23.426039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.699 [2024-05-15 10:30:23.426048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.699 [2024-05-15 10:30:23.426056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3085548 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # '[' -z 3085548 ']' 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:37.699 10:30:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:37.699 [2024-05-15 10:30:23.429662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.699 [2024-05-15 10:30:23.437717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.699 [2024-05-15 10:30:23.438574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.439183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.439196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.699 [2024-05-15 10:30:23.439206] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.699 [2024-05-15 10:30:23.439453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.699 [2024-05-15 10:30:23.439679] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.699 [2024-05-15 10:30:23.439688] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.699 [2024-05-15 10:30:23.439695] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.699 [2024-05-15 10:30:23.443286] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
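Interleaved with the retry cycle, the harness has killed the previous target process (the "Killed \"${NVMF_APP[@]}\"" message from bdevperf.sh) and tgt_init/nvmfappstart is launching a fresh nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0xE; waitforlisten then blocks until the new process is up and its RPC socket /var/tmp/spdk.sock is usable, which is why the reconnect attempts keep failing in the meantime. A simplified sketch of that wait, in bash, assuming the PID and socket path printed above; the real waitforlisten helper does more (it queries the RPC socket), while this only checks process liveness and socket existence:

  pid=3085548                  # nvmf_tgt PID reported above
  rpc_sock=/var/tmp/spdk.sock  # default RPC socket from the log
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "target exited before listening"; break; }
      if [ -S "$rpc_sock" ]; then
          echo "target is up and listening on $rpc_sock"
          break
      fi
      sleep 0.1
  done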
00:36:37.699 [2024-05-15 10:30:23.451571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.699 [2024-05-15 10:30:23.452091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.452763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.452800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.699 [2024-05-15 10:30:23.452811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.699 [2024-05-15 10:30:23.453053] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.699 [2024-05-15 10:30:23.453278] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.699 [2024-05-15 10:30:23.453285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.699 [2024-05-15 10:30:23.453300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.699 [2024-05-15 10:30:23.456902] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.699 [2024-05-15 10:30:23.465609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.699 [2024-05-15 10:30:23.466094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.466580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.466621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.699 [2024-05-15 10:30:23.466632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.699 [2024-05-15 10:30:23.466874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.699 [2024-05-15 10:30:23.467099] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.699 [2024-05-15 10:30:23.467108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.699 [2024-05-15 10:30:23.467116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.699 [2024-05-15 10:30:23.470723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.699 [2024-05-15 10:30:23.475922] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:36:37.699 [2024-05-15 10:30:23.475967] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.699 [2024-05-15 10:30:23.479633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.699 [2024-05-15 10:30:23.480588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.481197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.699 [2024-05-15 10:30:23.481210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.699 [2024-05-15 10:30:23.481220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.699 [2024-05-15 10:30:23.481468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.699 [2024-05-15 10:30:23.481694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.699 [2024-05-15 10:30:23.481702] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.699 [2024-05-15 10:30:23.481710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.699 [2024-05-15 10:30:23.485307] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.963 [2024-05-15 10:30:23.493574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.963 [2024-05-15 10:30:23.494405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.495007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.495020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.963 [2024-05-15 10:30:23.495030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.963 [2024-05-15 10:30:23.495271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.963 [2024-05-15 10:30:23.495504] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.963 [2024-05-15 10:30:23.495514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.963 [2024-05-15 10:30:23.495521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.963 [2024-05-15 10:30:23.499115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
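The EAL parameter line above shows the restarted target claiming cores via -c 0xE. Reading the mask: 0xE is binary 1110, so cores 1, 2 and 3 are enabled and core 0 is excluded, which matches the "Total cores available: 3" and the three "Reactor started on core N" notices further down. A small bash sketch decoding such a mask; the mask value is the one from the log, and the range of cores checked is arbitrary:

  mask=0xE
  for core in $(seq 0 7); do
      if (( (mask >> core) & 1 )); then
          echo "core $core enabled"
      fi
  done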
00:36:37.963 [2024-05-15 10:30:23.507613] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.963 [2024-05-15 10:30:23.508555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.963 [2024-05-15 10:30:23.509181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.509194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.963 [2024-05-15 10:30:23.509203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.963 [2024-05-15 10:30:23.509451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.963 [2024-05-15 10:30:23.509677] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.963 [2024-05-15 10:30:23.509685] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.963 [2024-05-15 10:30:23.509692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.963 [2024-05-15 10:30:23.513313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.963 [2024-05-15 10:30:23.521583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.963 [2024-05-15 10:30:23.522395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.522995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.523007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.963 [2024-05-15 10:30:23.523017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.963 [2024-05-15 10:30:23.523258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.963 [2024-05-15 10:30:23.523491] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.963 [2024-05-15 10:30:23.523500] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.963 [2024-05-15 10:30:23.523508] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.963 [2024-05-15 10:30:23.527104] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
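The "EAL: No free 2048 kB hugepages reported on node 1" notice above is informational in this run (the application keeps going, so its memory needs were satisfied elsewhere), but it is the first thing to check when EAL initialization fails outright. A quick per-node check using the standard sysfs layout; node numbering and counts depend on the machine:

  for node in /sys/devices/system/node/node[0-9]*; do
      free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages" 2>/dev/null || echo "n/a")
      echo "$(basename "$node"): ${free} free 2048 kB hugepages"
  done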
00:36:37.963 [2024-05-15 10:30:23.535577] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.963 [2024-05-15 10:30:23.536235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.536595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.536632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.963 [2024-05-15 10:30:23.536644] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.963 [2024-05-15 10:30:23.536885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.963 [2024-05-15 10:30:23.537111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.963 [2024-05-15 10:30:23.537119] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.963 [2024-05-15 10:30:23.537127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.963 [2024-05-15 10:30:23.540736] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.963 [2024-05-15 10:30:23.549425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.963 [2024-05-15 10:30:23.550338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.550816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.550829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.963 [2024-05-15 10:30:23.550839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.963 [2024-05-15 10:30:23.551080] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.963 [2024-05-15 10:30:23.551314] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.963 [2024-05-15 10:30:23.551324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.963 [2024-05-15 10:30:23.551331] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.963 [2024-05-15 10:30:23.554927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.963 [2024-05-15 10:30:23.559259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:37.963 [2024-05-15 10:30:23.563424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.963 [2024-05-15 10:30:23.564184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.564877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.564913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.963 [2024-05-15 10:30:23.564924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.963 [2024-05-15 10:30:23.565167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.963 [2024-05-15 10:30:23.565400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.963 [2024-05-15 10:30:23.565409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.963 [2024-05-15 10:30:23.565416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.963 [2024-05-15 10:30:23.569027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.963 [2024-05-15 10:30:23.577310] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.963 [2024-05-15 10:30:23.578139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.578895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.963 [2024-05-15 10:30:23.578934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.963 [2024-05-15 10:30:23.578947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.963 [2024-05-15 10:30:23.579195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.963 [2024-05-15 10:30:23.579427] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.963 [2024-05-15 10:30:23.579436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.963 [2024-05-15 10:30:23.579444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.963 [2024-05-15 10:30:23.583045] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.963 [2024-05-15 10:30:23.587836] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.963 [2024-05-15 10:30:23.587860] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:37.963 [2024-05-15 10:30:23.587866] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.963 [2024-05-15 10:30:23.587875] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:37.963 [2024-05-15 10:30:23.587879] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:37.963 [2024-05-15 10:30:23.588071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:37.963 [2024-05-15 10:30:23.588193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.964 [2024-05-15 10:30:23.588194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:37.964 [2024-05-15 10:30:23.591314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.592154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.592715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.592753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.592766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.593009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.593236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.593244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.593251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.596866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.964 [2024-05-15 10:30:23.605348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.606053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.606728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.606766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.606777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.607021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.607247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.607256] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.607264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.610902] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
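The app_setup_trace notices above give the commands for pulling trace data out of this run (the 0xFFFF group mask comes from the -e 0xFFFF flag passed to nvmf_tgt). Taking those notices at face value, with an arbitrary copy destination:

  # live snapshot while nvmf_tgt is still running
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0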
00:36:37.964 [2024-05-15 10:30:23.619381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.620227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.620865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.620902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.620914] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.621157] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.621389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.621404] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.621412] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.625007] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.964 [2024-05-15 10:30:23.633272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.634108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.634753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.634790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.634802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.635044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.635269] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.635277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.635285] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.638886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.964 [2024-05-15 10:30:23.647154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.647976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.648663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.648699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.648711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.648952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.649178] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.649186] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.649194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.652800] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.964 [2024-05-15 10:30:23.661058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.661895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.662526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.662563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.662574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.662815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.663039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.663047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.663060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.666662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.964 [2024-05-15 10:30:23.674927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.675752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.676506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.676542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.676553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.676795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.677020] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.677028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.677036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.680636] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.964 [2024-05-15 10:30:23.688893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.689670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.690222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.690232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.690239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.690466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.690688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.690696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.690703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.694296] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.964 [2024-05-15 10:30:23.702884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.703792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.704485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.704521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.704532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.704774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.964 [2024-05-15 10:30:23.704999] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.964 [2024-05-15 10:30:23.705007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.964 [2024-05-15 10:30:23.705015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.964 [2024-05-15 10:30:23.708629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.964 [2024-05-15 10:30:23.716890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.964 [2024-05-15 10:30:23.717763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.718488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.964 [2024-05-15 10:30:23.718525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.964 [2024-05-15 10:30:23.718537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.964 [2024-05-15 10:30:23.718782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.965 [2024-05-15 10:30:23.719007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.965 [2024-05-15 10:30:23.719016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.965 [2024-05-15 10:30:23.719024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.965 [2024-05-15 10:30:23.722633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:37.965 [2024-05-15 10:30:23.730886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.965 [2024-05-15 10:30:23.731413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.965 [2024-05-15 10:30:23.731975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.965 [2024-05-15 10:30:23.731985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.965 [2024-05-15 10:30:23.731993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.965 [2024-05-15 10:30:23.732220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.965 [2024-05-15 10:30:23.732447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.965 [2024-05-15 10:30:23.732455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.965 [2024-05-15 10:30:23.732462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.965 [2024-05-15 10:30:23.736053] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:37.965 [2024-05-15 10:30:23.744776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:37.965 [2024-05-15 10:30:23.745130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.965 [2024-05-15 10:30:23.745617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:37.965 [2024-05-15 10:30:23.745628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:37.965 [2024-05-15 10:30:23.745635] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:37.965 [2024-05-15 10:30:23.745857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:37.965 [2024-05-15 10:30:23.746078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:37.965 [2024-05-15 10:30:23.746085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:37.965 [2024-05-15 10:30:23.746092] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:37.965 [2024-05-15 10:30:23.749684] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.228 [2024-05-15 10:30:23.758799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.759568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.760143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.760154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.760161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.760386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.760608] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.760615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.760622] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.764225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.228 [2024-05-15 10:30:23.772690] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.773586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.774155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.774168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.774178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.774425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.774650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.774659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.774666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.778264] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.228 [2024-05-15 10:30:23.786531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.787504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.788128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.788141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.788150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.788397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.788623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.788632] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.788639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.792233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.228 [2024-05-15 10:30:23.800502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.801172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.801856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.801893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.801904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.802146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.802378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.802388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.802395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.805986] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.228 [2024-05-15 10:30:23.814457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.815125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.815809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.815845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.815856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.816097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.816329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.816339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.816346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.819946] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.228 [2024-05-15 10:30:23.828419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.829248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.829884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.829921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.829931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.830173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.830404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.830414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.830421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.834016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.228 [2024-05-15 10:30:23.842274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.842888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.843160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.843184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.843192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.843423] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.843647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.843656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.843663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.847251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.228 [2024-05-15 10:30:23.856148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.856974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.857555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.857593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.857605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.857848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.858073] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.858081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.858089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.861696] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.228 [2024-05-15 10:30:23.870193] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.870873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.871480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.871494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.871504] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.871745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.871970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.871978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.871985] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.875583] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.228 [2024-05-15 10:30:23.884053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.228 [2024-05-15 10:30:23.884846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.885521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.228 [2024-05-15 10:30:23.885557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.228 [2024-05-15 10:30:23.885572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.228 [2024-05-15 10:30:23.885815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.228 [2024-05-15 10:30:23.886040] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.228 [2024-05-15 10:30:23.886048] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.228 [2024-05-15 10:30:23.886056] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.228 [2024-05-15 10:30:23.889657] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.228 [2024-05-15 10:30:23.897919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.898831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.899318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.899341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.899351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.899592] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.899817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.899825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.899833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:23.903439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.229 [2024-05-15 10:30:23.911909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.912814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.913250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.913263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.913272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.913519] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.913744] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.913752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.913760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:23.917360] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.229 [2024-05-15 10:30:23.925830] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.926335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.926896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.926906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.926913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.927144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.927371] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.927380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.927387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:23.930980] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.229 [2024-05-15 10:30:23.939871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.940731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.941319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.941342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.941351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.941593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.941818] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.941827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.941834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:23.945434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.229 [2024-05-15 10:30:23.953909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.954587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.955166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.955178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.955185] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.955412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.955634] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.955642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.955649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:23.959243] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.229 [2024-05-15 10:30:23.967936] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.968819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.969287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.969305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.969315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.969556] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.969785] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.969793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.969801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:23.973398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.229 [2024-05-15 10:30:23.981876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.982743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.983479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.983516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.983527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.983769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.983994] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.984002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.984010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:23.987614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.229 [2024-05-15 10:30:23.995878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:23.996513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.997079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:23.997092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:23.997102] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:23.997348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:23.997573] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:23.997581] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:23.997589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:24.001182] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.229 [2024-05-15 10:30:24.009884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.229 [2024-05-15 10:30:24.010788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:24.011355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.229 [2024-05-15 10:30:24.011369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.229 [2024-05-15 10:30:24.011378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.229 [2024-05-15 10:30:24.011620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.229 [2024-05-15 10:30:24.011845] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.229 [2024-05-15 10:30:24.011853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.229 [2024-05-15 10:30:24.011865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.229 [2024-05-15 10:30:24.015464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.492 [2024-05-15 10:30:24.023939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.492 [2024-05-15 10:30:24.024843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.025522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.025559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.492 [2024-05-15 10:30:24.025570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.492 [2024-05-15 10:30:24.025811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.492 [2024-05-15 10:30:24.026037] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.492 [2024-05-15 10:30:24.026045] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.492 [2024-05-15 10:30:24.026053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.492 [2024-05-15 10:30:24.029658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.492 [2024-05-15 10:30:24.037917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.492 [2024-05-15 10:30:24.038801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.039485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.039521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.492 [2024-05-15 10:30:24.039532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.492 [2024-05-15 10:30:24.039774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.492 [2024-05-15 10:30:24.040000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.492 [2024-05-15 10:30:24.040008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.492 [2024-05-15 10:30:24.040015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.492 [2024-05-15 10:30:24.043620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.492 [2024-05-15 10:30:24.051884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.492 [2024-05-15 10:30:24.052785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.053221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.053235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.492 [2024-05-15 10:30:24.053244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.492 [2024-05-15 10:30:24.053491] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.492 [2024-05-15 10:30:24.053717] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.492 [2024-05-15 10:30:24.053725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.492 [2024-05-15 10:30:24.053737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.492 [2024-05-15 10:30:24.057338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.492 [2024-05-15 10:30:24.065825] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.492 [2024-05-15 10:30:24.066692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.067256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.067269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.492 [2024-05-15 10:30:24.067278] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.492 [2024-05-15 10:30:24.067525] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.492 [2024-05-15 10:30:24.067751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.492 [2024-05-15 10:30:24.067760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.492 [2024-05-15 10:30:24.067768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.492 [2024-05-15 10:30:24.071368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.492 [2024-05-15 10:30:24.079839] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.492 [2024-05-15 10:30:24.080585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.080965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.492 [2024-05-15 10:30:24.080978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.492 [2024-05-15 10:30:24.080987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.492 [2024-05-15 10:30:24.081227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.081459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.081469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.081476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.085073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.493 [2024-05-15 10:30:24.093766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.094700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.095279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.095301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.095311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.095552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.095777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.095785] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.095792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.099389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.493 [2024-05-15 10:30:24.107658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.108525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.109123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.109136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.109145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.109393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.109619] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.109627] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.109635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.113234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.493 [2024-05-15 10:30:24.121504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.122125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.122803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.122840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.122851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.123093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.123324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.123333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.123340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.126933] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.493 [2024-05-15 10:30:24.135409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.136110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.136592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.136629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.136640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.136881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.137107] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.137114] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.137122] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.140717] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.493 [2024-05-15 10:30:24.149409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.149920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.150583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.150620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.150631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.150872] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.151099] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.151107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.151115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.154721] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.493 [2024-05-15 10:30:24.163421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.163945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.164593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.164630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.164640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.164882] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.165107] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.165115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.165123] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.168730] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.493 [2024-05-15 10:30:24.177420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.178251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.178792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.178803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.178810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.179033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.179255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.179263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.179271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.182861] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.493 [2024-05-15 10:30:24.191334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.192104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.192773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.192809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.192821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.193062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.193288] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.193305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.193313] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.196906] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.493 [2024-05-15 10:30:24.205171] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.205965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.206615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.206651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.493 [2024-05-15 10:30:24.206662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.493 [2024-05-15 10:30:24.206903] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.493 [2024-05-15 10:30:24.207128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.493 [2024-05-15 10:30:24.207137] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.493 [2024-05-15 10:30:24.207144] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.493 [2024-05-15 10:30:24.210749] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.493 [2024-05-15 10:30:24.219217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.493 [2024-05-15 10:30:24.219677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.493 [2024-05-15 10:30:24.220249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.220259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.494 [2024-05-15 10:30:24.220266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.494 [2024-05-15 10:30:24.220498] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.494 [2024-05-15 10:30:24.220722] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.494 [2024-05-15 10:30:24.220729] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.494 [2024-05-15 10:30:24.220736] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.494 [2024-05-15 10:30:24.224323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.494 [2024-05-15 10:30:24.233211] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.494 [2024-05-15 10:30:24.234030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.234264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.234273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.494 [2024-05-15 10:30:24.234285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.494 [2024-05-15 10:30:24.234512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.494 [2024-05-15 10:30:24.234733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.494 [2024-05-15 10:30:24.234741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.494 [2024-05-15 10:30:24.234747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.494 [2024-05-15 10:30:24.238335] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.494 [2024-05-15 10:30:24.247219] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.494 [2024-05-15 10:30:24.247925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.248434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.248448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.494 [2024-05-15 10:30:24.248458] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.494 [2024-05-15 10:30:24.248700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.494 [2024-05-15 10:30:24.248926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.494 [2024-05-15 10:30:24.248934] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.494 [2024-05-15 10:30:24.248941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.494 [2024-05-15 10:30:24.252541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
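Every cycle in the stretch above is the same nine-record sequence: bdev_nvme starts a controller reset, two back-to-back connect() attempts to 10.0.0.2:4420 fail with errno = 111 (ECONNREFUSED — the address is reachable but nothing is accepting on port 4420 yet), the qpair flush then fails on the dead socket with errno 9 ("Bad file descriptor"), and spdk_nvme_ctrlr_reconnect_poll_async gives up with "Resetting controller failed." before the next retry roughly 14 ms later. The underlying socket error is easy to reproduce outside SPDK; a minimal sketch, assuming a host that is reachable but has no listener on the port (as with 10.0.0.2:4420 here before the target's listener exists):

  $ python3 -c "import socket; socket.create_connection(('10.0.0.2', 4420), timeout=1)"
  ...
  ConnectionRefusedError: [Errno 111] Connection refused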
00:36:38.494 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:38.494 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@861 -- # return 0 00:36:38.494 10:30:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:38.494 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:38.494 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.494 [2024-05-15 10:30:24.261226] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.494 [2024-05-15 10:30:24.262133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.262717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.262732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.494 [2024-05-15 10:30:24.262742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.494 [2024-05-15 10:30:24.262984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.494 [2024-05-15 10:30:24.263209] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.494 [2024-05-15 10:30:24.263218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.494 [2024-05-15 10:30:24.263226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.494 [2024-05-15 10:30:24.266830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.494 [2024-05-15 10:30:24.275088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.494 [2024-05-15 10:30:24.275898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.276583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.494 [2024-05-15 10:30:24.276620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.494 [2024-05-15 10:30:24.276631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.494 [2024-05-15 10:30:24.276873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.494 [2024-05-15 10:30:24.277098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.494 [2024-05-15 10:30:24.277106] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.494 [2024-05-15 10:30:24.277114] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.494 [2024-05-15 10:30:24.280715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.756 [2024-05-15 10:30:24.288980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.756 [2024-05-15 10:30:24.289702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.756 [2024-05-15 10:30:24.289950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.756 [2024-05-15 10:30:24.289961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.756 [2024-05-15 10:30:24.289969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.756 [2024-05-15 10:30:24.290192] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.756 [2024-05-15 10:30:24.290418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.756 [2024-05-15 10:30:24.290427] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.756 [2024-05-15 10:30:24.290435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.756 [2024-05-15 10:30:24.294028] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.756 [2024-05-15 10:30:24.301446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.756 [2024-05-15 10:30:24.302920] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.756 [2024-05-15 10:30:24.303723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.756 [2024-05-15 10:30:24.304270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.756 [2024-05-15 10:30:24.304279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.756 [2024-05-15 10:30:24.304287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.756 [2024-05-15 10:30:24.304512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.756 [2024-05-15 10:30:24.304733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.756 [2024-05-15 10:30:24.304741] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.756 [2024-05-15 10:30:24.304747] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
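Interleaved with the reconnect noise, the test script has resumed above: rpc_cmd nvmf_create_transport -t tcp -o -u 8192 is the step that produces the "*** TCP Transport Init ***" notice from tcp.c. Outside the harness, the same RPC can be issued by hand against a running nvmf_tgt with SPDK's scripts/rpc.py; a minimal sketch, assuming the default local RPC socket (-t selects the TCP transport, -u 8192 sets the I/O unit size; the harness's extra -o flag is transport-specific tuning and is left out here):

  $ scripts/rpc.py nvmf_create_transport -t tcp -u 8192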
00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.756 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.756 [2024-05-15 10:30:24.308342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.756 [2024-05-15 10:30:24.316811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.756 [2024-05-15 10:30:24.317705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 [2024-05-15 10:30:24.318275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 [2024-05-15 10:30:24.318287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.757 [2024-05-15 10:30:24.318302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.757 [2024-05-15 10:30:24.318544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.757 [2024-05-15 10:30:24.318768] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.757 [2024-05-15 10:30:24.318776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.757 [2024-05-15 10:30:24.318783] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.757 [2024-05-15 10:30:24.322381] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.757 [2024-05-15 10:30:24.330655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.757 [2024-05-15 10:30:24.331564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 [2024-05-15 10:30:24.332128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 [2024-05-15 10:30:24.332140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.757 [2024-05-15 10:30:24.332149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.757 [2024-05-15 10:30:24.332397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.757 [2024-05-15 10:30:24.332623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.757 [2024-05-15 10:30:24.332631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.757 [2024-05-15 10:30:24.332638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.757 [2024-05-15 10:30:24.336230] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.757 Malloc0 00:36:38.757 [2024-05-15 10:30:24.344495] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.757 [2024-05-15 10:30:24.345334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.757 [2024-05-15 10:30:24.345883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 [2024-05-15 10:30:24.345893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.757 [2024-05-15 10:30:24.345901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.757 [2024-05-15 10:30:24.346132] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.757 [2024-05-15 10:30:24.346360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.757 [2024-05-15 10:30:24.346368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.757 [2024-05-15 10:30:24.346375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.757 [2024-05-15 10:30:24.349965] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.757 [2024-05-15 10:30:24.358433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.757 [2024-05-15 10:30:24.359254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 [2024-05-15 10:30:24.359639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.757 [2024-05-15 10:30:24.359676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd96c0 with addr=10.0.0.2, port=4420 00:36:38.757 [2024-05-15 10:30:24.359688] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd96c0 is same with the state(5) to be set 00:36:38.757 [2024-05-15 10:30:24.359933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd96c0 (9): Bad file descriptor 00:36:38.757 [2024-05-15 10:30:24.360158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:38.757 [2024-05-15 10:30:24.360166] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:38.757 [2024-05-15 10:30:24.360174] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:38.757 [2024-05-15 10:30:24.363786] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.757 [2024-05-15 10:30:24.371383] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:38.757 [2024-05-15 10:30:24.371559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.757 [2024-05-15 10:30:24.372475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:38.757 10:30:24 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3084524 00:36:38.757 [2024-05-15 10:30:24.420106] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
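Interleaved with the reset/retry messages above, host/bdevperf.sh rebuilds the target configuration over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and finally the 10.0.0.2:4420 listener that lets the pending controller reset succeed. A condensed sketch of that sequence, assuming SPDK's scripts/rpc.py client and its default RPC socket (the harness wraps these calls in its rpc_cmd helper); the flag strings are copied verbatim from the trace:

    # Replay of the rpc_cmd calls traced above (rpc.py path and socket are assumptions).
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags as used by the test
    $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # attach Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the retries stop ("Resetting controller successful.") and the script waits on the bdevperf process (pid 3084524 in this run).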
00:36:48.771 00:36:48.771 Latency(us) 00:36:48.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.771 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:48.771 Verification LBA range: start 0x0 length 0x4000 00:36:48.771 Nvme1n1 : 15.01 8323.01 32.51 9067.38 0.00 7334.32 1645.23 28617.39 00:36:48.771 =================================================================================================================== 00:36:48.771 Total : 8323.01 32.51 9067.38 0.00 7334.32 1645.23 28617.39 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:48.771 rmmod nvme_tcp 00:36:48.771 rmmod nvme_fabrics 00:36:48.771 rmmod nvme_keyring 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3085548 ']' 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3085548 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' -z 3085548 ']' 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # kill -0 3085548 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # uname 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:36:48.771 10:30:32 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3085548 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3085548' 00:36:48.771 killing process with pid 3085548 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # kill 3085548 00:36:48.771 [2024-05-15 10:30:33.029688] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@971 -- # wait 3085548 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:48.771 10:30:33 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.718 10:30:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:49.718 00:36:49.718 real 0m27.596s 00:36:49.718 user 1m2.573s 00:36:49.718 sys 0m6.966s 00:36:49.718 10:30:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:49.718 10:30:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:49.718 ************************************ 00:36:49.718 END TEST nvmf_bdevperf 00:36:49.718 ************************************ 00:36:49.718 10:30:35 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:49.718 10:30:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:36:49.718 10:30:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:49.718 10:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:49.718 ************************************ 00:36:49.718 START TEST nvmf_target_disconnect 00:36:49.718 ************************************ 00:36:49.718 10:30:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:49.718 * Looking for test storage... 
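The bdevperf summary a little further up reports a 15.01 s run at 8323.01 IOPS and 32.51 MiB/s with 4096-byte I/Os; the large Fail/s column is expected here given the repeated controller resets traced before it. The throughput column follows directly from the IOPS column; a quick check (the awk one-liner is only illustrative):

    # 8323.01 IOPS x 4096 bytes per I/O, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 8323.01 * 4096 / (1024 * 1024) }'   # prints 32.51 MiB/s

After that summary the nvmf_bdevperf test tears down (subsystem deleted over RPC, nvme modules removed, target pid 3085548 killed and reaped) and the nvmf_target_disconnect test begins.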
00:36:49.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:49.718 10:30:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:49.718 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:49.718 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:49.718 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:49.718 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:36:49.719 10:30:35 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:56.391 10:30:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:56.391 10:30:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:36:56.391 10:30:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:56.391 10:30:41 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:56.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:56.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.391 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.392 10:30:42 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:56.392 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:56.392 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:56.392 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:56.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:56.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:36:56.654 00:36:56.654 --- 10.0.0.2 ping statistics --- 00:36:56.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.654 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:56.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:56.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.466 ms 00:36:56.654 00:36:56.654 --- 10.0.0.1 ping statistics --- 00:36:56.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:56.654 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:56.654 10:30:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:56.654 ************************************ 00:36:56.654 START TEST nvmf_target_disconnect_tc1 00:36:56.654 ************************************ 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc1 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@649 -- # local es=0 00:36:56.655 
10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:56.655 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:56.917 EAL: No free 2048 kB hugepages reported on node 1 00:36:56.917 [2024-05-15 10:30:42.490403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:56.917 [2024-05-15 10:30:42.490813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:56.917 [2024-05-15 10:30:42.490828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8aa230 with addr=10.0.0.2, port=4420 00:36:56.917 [2024-05-15 10:30:42.490849] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:56.917 [2024-05-15 10:30:42.490858] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:56.917 [2024-05-15 10:30:42.490865] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:36:56.917 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:56.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:56.917 Initializing NVMe Controllers 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # es=1 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@676 
-- # (( !es == 0 )) 00:36:56.917 00:36:56.917 real 0m0.104s 00:36:56.917 user 0m0.044s 00:36:56.917 sys 0m0.059s 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:56.917 ************************************ 00:36:56.917 END TEST nvmf_target_disconnect_tc1 00:36:56.917 ************************************ 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1104 -- # xtrace_disable 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:56.917 ************************************ 00:36:56.917 START TEST nvmf_target_disconnect_tc2 00:36:56.917 ************************************ 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # nvmf_target_disconnect_tc2 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3091594 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3091594 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 3091594 ']' 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:56.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
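tc1 above passes precisely because the probe fails: nothing is listening on 10.0.0.2:4420 yet, so connect() returns errno 111 (ECONNREFUSED), spdk_nvme_probe() cannot create the admin qpair, the reconnect example exits non-zero (es=1), and the NOT wrapper turns that expected failure into a pass. A simplified sketch of that pattern, using the reconnect binary path from this run (the real NOT/valid_exec_arg handling in autotest_common.sh is more involved):

    # tc1-style check: the probe is *supposed* to fail while no target is listening.
    if ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
            -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "probe failed as expected (no listener on 10.0.0.2:4420)"
    else
        echo "unexpected success" >&2; exit 1
    fi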
00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:56.917 10:30:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:56.917 [2024-05-15 10:30:42.639599] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:36:56.917 [2024-05-15 10:30:42.639644] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:56.917 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.179 [2024-05-15 10:30:42.720093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:57.179 [2024-05-15 10:30:42.752800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.179 [2024-05-15 10:30:42.752837] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.179 [2024-05-15 10:30:42.752844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.179 [2024-05-15 10:30:42.752850] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.179 [2024-05-15 10:30:42.752856] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:57.179 [2024-05-15 10:30:42.753357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:36:57.179 [2024-05-15 10:30:42.753617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:36:57.179 [2024-05-15 10:30:42.753734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:36:57.179 [2024-05-15 10:30:42.753735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.754 Malloc0 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.754 [2024-05-15 10:30:43.481923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.754 [2024-05-15 10:30:43.522003] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:57.754 [2024-05-15 10:30:43.522360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3091800 00:36:57.754 10:30:43 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:57.754 10:30:43 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:58.016 EAL: No free 2048 kB hugepages reported on node 1 00:36:59.938 10:30:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3091594 00:36:59.938 10:30:45 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Write completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 Read completed with error (sct=0, sc=8) 00:36:59.938 starting I/O failed 00:36:59.938 [2024-05-15 10:30:45.558739] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:59.938 [2024-05-15 10:30:45.559189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.559823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.559854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:36:59.938 qpair failed and we were unable to recover it. 00:36:59.938 [2024-05-15 10:30:45.560213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.560726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.560756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:36:59.938 qpair failed and we were unable to recover it. 00:36:59.938 [2024-05-15 10:30:45.561500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.561921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.561932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:36:59.938 qpair failed and we were unable to recover it. 00:36:59.938 [2024-05-15 10:30:45.562562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.563161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.563172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:36:59.938 qpair failed and we were unable to recover it. 00:36:59.938 [2024-05-15 10:30:45.563721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.564267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.564276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:36:59.938 qpair failed and we were unable to recover it. 00:36:59.938 [2024-05-15 10:30:45.564863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.565509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.565538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:36:59.938 qpair failed and we were unable to recover it. 00:36:59.938 [2024-05-15 10:30:45.565920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.566552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:59.938 [2024-05-15 10:30:45.566581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:36:59.938 qpair failed and we were unable to recover it. 
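What tc2 is exercising here: the target (pid 3091594, started inside the cvl_0_0_ns_spdk namespace with -m 0xF0, i.e. cores 4-7) is configured over RPC much as in the bdevperf run above (transport, Malloc0, cnode1, listener on 4420), the reconnect example is started against 10.0.0.2:4420 with a queue depth of 32 and 4096-byte I/Os, and after two seconds the target is hard-killed, so the outstanding I/Os complete in error and every reconnect attempt gets connect() errno 111. A rough sketch of that sequence, with flags taken from the trace and paths abbreviated (PIDs naturally differ per run):

    # tc2 flow as traced: start the target, configure it, drive I/O, then kill the target mid-run.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # ... transport/subsystem/namespace/listener RPCs as in the bdevperf setup above ...
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"    # force the disconnect; the host sees errno 111 until a target is back
    sleep 2

The "qpair failed and we were unable to recover it" lines that follow are the expected host-side reaction while nothing is listening on the address.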
00:37:00.211 [2024-05-15 10:30:45.731030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.731697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.731726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.732277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.732944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.732973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.733579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.734135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.734146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.734818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.735507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.735536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.736163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.736602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.736631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.737181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.737790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.737819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.738454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.739005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.739014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 
00:37:00.211 [2024-05-15 10:30:45.739683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.740280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.740296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.740935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.741621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.741651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.742198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.742872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.742905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.743526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.743801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.743818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.744391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.744919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.744928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.745460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.745991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.745998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.746540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.747107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.747115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 
00:37:00.211 [2024-05-15 10:30:45.747773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.748446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.748476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.749013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.749685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.749714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.750263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.750868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.750897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.751549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.752094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.752104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.752735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.753479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.753509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.754064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.754730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.754759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.755469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.756077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.756087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 
00:37:00.211 [2024-05-15 10:30:45.756717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.757285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.757301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.757949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.758639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.758668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.759229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.759853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.759882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.760451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.761005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.761015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.761650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.762246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.211 [2024-05-15 10:30:45.762256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.211 qpair failed and we were unable to recover it. 00:37:00.211 [2024-05-15 10:30:45.762615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.763170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.763180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.763750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.764463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.764492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 
00:37:00.212 [2024-05-15 10:30:45.765039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.765692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.765721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.766289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.766938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.766967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.767611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.768204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.768215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.768849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.769254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.769265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.769881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.770576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.770605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.771159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.771787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.771816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.772458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.773060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.773070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 
00:37:00.212 [2024-05-15 10:30:45.773729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.774452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.774482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.775032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.775694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.775723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.776270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.776888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.776917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.777568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.778165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.778175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.778832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.779494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.779524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.780076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.780746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.780775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.781450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.781998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.782008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 
00:37:00.212 [2024-05-15 10:30:45.782645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.783233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.783243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.783866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.784516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.784545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.785077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.785736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.785765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.786457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.787015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.787026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.787655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.788252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.788262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.788914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.789555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.789584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.790131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.790795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.790824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 
00:37:00.212 [2024-05-15 10:30:45.791480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.791755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.791773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.792305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.792876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.792884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.793440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.794009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.794017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.794571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.795099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.795106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.795735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.796449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.796478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.797024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.797560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.797590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.798151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.798804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.798832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 
00:37:00.212 [2024-05-15 10:30:45.799473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.800072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.800082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.212 [2024-05-15 10:30:45.800722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.801475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.212 [2024-05-15 10:30:45.801504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.212 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.802049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.802706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.802735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.803305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.803878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.803885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.804536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.805131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.805141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.805775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.806486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.806515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.807049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.807713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.807742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 
00:37:00.213 [2024-05-15 10:30:45.808314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.808894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.808903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.809577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.810134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.810144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.810784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.811484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.811514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.812050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.812680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.812710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.813278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.813902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.813931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.814569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.815160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.815170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.815675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.816270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.816280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 
00:37:00.213 [2024-05-15 10:30:45.816803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.817504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.817533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.818130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.818715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.818745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.819191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.819819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.819848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.820485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.821081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.821091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.821760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.822482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.822511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.823075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.823740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.823769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.824497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.825087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.825097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 
00:37:00.213 [2024-05-15 10:30:45.825731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.826281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.826299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.826950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.827632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.827662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.828235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.828869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.828898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.829558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.830155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.830165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.830830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.831530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.831559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.832107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.832775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.832804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.833235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.833835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.833864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 
00:37:00.213 [2024-05-15 10:30:45.834529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.835127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.835137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.835768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.836464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.836493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.837061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.837731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.837761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.838509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.839059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.839070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.839703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.840300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.840310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.213 [2024-05-15 10:30:45.840850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.841471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.213 [2024-05-15 10:30:45.841500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.213 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.842047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.842714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.842742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 
00:37:00.214 [2024-05-15 10:30:45.843266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.843893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.843923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.844557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.845152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.845162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.845879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.846435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.846464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.847015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.847638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.847667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.848236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.848588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.848617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.849164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.849810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.849840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.850479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.851046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.851056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 
00:37:00.214 [2024-05-15 10:30:45.851731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.852008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.852024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.852684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.853280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.853295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.853836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.854496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.854525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.855079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.855750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.855779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.856459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.857054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.857064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.857575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.858174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.858184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.858904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.859596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.859625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 
00:37:00.214 [2024-05-15 10:30:45.860163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.860784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.860813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.861453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.862050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.862061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.862722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.863465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.863495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.864044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.864717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.864746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.865286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.865906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.865936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.866176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.866607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.866616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.867188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.867737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.867745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 
00:37:00.214 [2024-05-15 10:30:45.868286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.868851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.868860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.869546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.870117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.870127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.870768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.871481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.871510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.872056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.872677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.872706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.873257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.873872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.873902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.874543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.875143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.875153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.875786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.876480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.876509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 
00:37:00.214 [2024-05-15 10:30:45.877070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.877744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.877773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.878455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.879005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.214 [2024-05-15 10:30:45.879015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.214 qpair failed and we were unable to recover it. 00:37:00.214 [2024-05-15 10:30:45.879649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.880251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.880261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.880896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.881588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.881617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.882182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.882801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.882830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.883493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.884095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.884105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.884745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.885454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.885482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 
00:37:00.215 [2024-05-15 10:30:45.886030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.886688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.886717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.887279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.887940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.887969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.888611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.889165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.889175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.889727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.890449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.890478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.891016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.891636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.891668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.892222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.892798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.892806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.893449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.894006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.894016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 
00:37:00.215 [2024-05-15 10:30:45.894644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.895239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.895249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.895878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.896300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.896311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.896985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.897640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.897670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.898218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.898893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.898922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.899562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.900159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.900169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.900833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.901523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.901552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.902118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.902258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.902272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 
00:37:00.215 [2024-05-15 10:30:45.902830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.903486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.903518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.904064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.904685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.904714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.905161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.905668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.905697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.906258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.906926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.906955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.907585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.908180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.908190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.908833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.909506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.909535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 00:37:00.215 [2024-05-15 10:30:45.910081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.910696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.910725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.215 qpair failed and we were unable to recover it. 
00:37:00.215 [2024-05-15 10:30:45.911284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.215 [2024-05-15 10:30:45.911910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.911939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.912579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.913044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.913053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.913676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.914274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.914284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.914937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.915581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.915614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.916176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.916839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.916868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.917512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.918107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.918117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.918650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.919213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.919223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 
00:37:00.216 [2024-05-15 10:30:45.919855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.920541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.920570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.921135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.921794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.921823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.922461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.923060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.923070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.923709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.924059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.924070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.924690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.925268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.925278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.925987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.926422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.926433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.926983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.927643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.927675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 
00:37:00.216 [2024-05-15 10:30:45.928222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.928822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.928851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.929508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.930061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.930071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.930578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.931174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.931183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.931738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.932312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.932327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.932861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.933521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.933551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.934100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.934708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.934737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.935263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.935932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.935961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 
00:37:00.216 [2024-05-15 10:30:45.936668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.937219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.937230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.937873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.938555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.938585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.939131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.939628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.939657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.940186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.940810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.940839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.941508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.942103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.942113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.942629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.943180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.943191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.943748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.944279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.944287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 
00:37:00.216 [2024-05-15 10:30:45.944940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.945582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.945612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.946149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.946795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.946825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.947462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.947929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.947940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.948569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.949165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.949175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.216 qpair failed and we were unable to recover it. 00:37:00.216 [2024-05-15 10:30:45.949712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.216 [2024-05-15 10:30:45.950282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.950297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.950943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.951633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.951662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.952221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.952891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.952920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 
00:37:00.217 [2024-05-15 10:30:45.953588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.954186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.954197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.954835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.955482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.955512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.956059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.956646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.956676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.957231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.957816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.957845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.958489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.958970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.958980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.959630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.960099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.960109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.960756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.961449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.961478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 
00:37:00.217 [2024-05-15 10:30:45.962029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.962704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.962732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.963284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.963640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.963667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.964242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.964820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.964829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.965457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.966043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.966053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.966688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.967284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.967301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.967941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.968632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.968660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.969200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.969792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.969821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 
00:37:00.217 [2024-05-15 10:30:45.970484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.971079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.971089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.971749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.972451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.972480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.973019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.973690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.973719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.974303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.974839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.974847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.975503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.975777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.975793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.976247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.976823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.976832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.977468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.977907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.977918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 
00:37:00.217 [2024-05-15 10:30:45.978613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.979215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.979224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.979759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.980449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.980478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.981023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.981641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.981671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.982119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.982786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.982815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.983452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.984018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.984029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.984669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.985267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.985277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.985906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.986596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.986625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 
00:37:00.217 [2024-05-15 10:30:45.987007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.987654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.987683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.217 qpair failed and we were unable to recover it. 00:37:00.217 [2024-05-15 10:30:45.988249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.988835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.217 [2024-05-15 10:30:45.988864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.989514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.990110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.990120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.990756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.991449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.991478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.992014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.992686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.992716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.993286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.993893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.993922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.994608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.995212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.995222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 
00:37:00.218 [2024-05-15 10:30:45.995850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.996543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.996572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.997120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.997788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.997817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.998468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.999020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.218 [2024-05-15 10:30:45.999030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.218 qpair failed and we were unable to recover it. 00:37:00.218 [2024-05-15 10:30:45.999672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.000108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.000120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.000757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.001488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.001517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.002069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.002728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.002758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.003294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.003920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.003949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 
00:37:00.485 [2024-05-15 10:30:46.004588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.005184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.005194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.005834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.006520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.006549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.007098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.007722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.007751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.008498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.009051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.009061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.009721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.010178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.010188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.010819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.011235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.011245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 00:37:00.485 [2024-05-15 10:30:46.011873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.012515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.485 [2024-05-15 10:30:46.012544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.485 qpair failed and we were unable to recover it. 
00:37:00.485 [2024-05-15 10:30:46.013111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.485 [2024-05-15 10:30:46.013782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.485 [2024-05-15 10:30:46.013811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:00.485 qpair failed and we were unable to recover it.
[... the same failure sequence repeats continuously from 2024-05-15 10:30:46.014 through 10:30:46.199: two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:37:00.491 [2024-05-15 10:30:46.199769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.491 [2024-05-15 10:30:46.200494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.491 [2024-05-15 10:30:46.200523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:00.491 qpair failed and we were unable to recover it.
00:37:00.491 [2024-05-15 10:30:46.201072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.201687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.201716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.202257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.202747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.202776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.203492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.203974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.203984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.204499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.205095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.205106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.205784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.206484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.206513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.207063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.207691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.207720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.208279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.208812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.208841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-05-15 10:30:46.209211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.209858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.209887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.210530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.211132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.211142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.211778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.212466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.212496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.213057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.213716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.213746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.214181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.214583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.214612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.215158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.215813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.215842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.216483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.217081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.217091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-05-15 10:30:46.217756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.218455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.218484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.218892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.219558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.219586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.220142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.220778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.220807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.221453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.222048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.222058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.222685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.223281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.223297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.223939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.224609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.224638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.225197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.225786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.225816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-05-15 10:30:46.226454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.227048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.227058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.227723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.228272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.228283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.228940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.229634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.229664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.230212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.230851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.230880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.231518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.231990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.232000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.232667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.233102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.233113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 00:37:00.491 [2024-05-15 10:30:46.233798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.234451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.234480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.491 qpair failed and we were unable to recover it. 
00:37:00.491 [2024-05-15 10:30:46.235027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.491 [2024-05-15 10:30:46.235642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.235671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.235914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.236459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.236467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.237040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.237655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.237685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.238233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.238778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.238787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.239448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.240000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.240009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.240641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.241238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.241248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.241871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.242522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.242551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 
00:37:00.492 [2024-05-15 10:30:46.243101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.243777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.243807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.244448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.244848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.244860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.245511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.246112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.246123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.246762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.247456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.247485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.248030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.248689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.248717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.249265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.249881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.249910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.250549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.251145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.251155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 
00:37:00.492 [2024-05-15 10:30:46.251816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.252490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.252520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.253057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.253677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.253706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.254251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.254892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.254922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.255647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.255922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.255939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.256384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.256919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.256926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.257461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.257992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.258000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.258666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.259270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.259280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 
00:37:00.492 [2024-05-15 10:30:46.259900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.260528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.260557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.261123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.261783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.261813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.262535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.263140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.263150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.263786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.264453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.264482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.264930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.265549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.265584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.266146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.266713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.266723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.267261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.267787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.267795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 
00:37:00.492 [2024-05-15 10:30:46.268500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.269106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.269116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.492 qpair failed and we were unable to recover it. 00:37:00.492 [2024-05-15 10:30:46.269773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.492 [2024-05-15 10:30:46.270501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.493 [2024-05-15 10:30:46.270530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.493 qpair failed and we were unable to recover it. 00:37:00.493 [2024-05-15 10:30:46.271066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.493 [2024-05-15 10:30:46.271719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.493 [2024-05-15 10:30:46.271749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.493 qpair failed and we were unable to recover it. 00:37:00.493 [2024-05-15 10:30:46.272298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.493 [2024-05-15 10:30:46.272822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.493 [2024-05-15 10:30:46.272851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.493 qpair failed and we were unable to recover it. 00:37:00.493 [2024-05-15 10:30:46.273521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.493 [2024-05-15 10:30:46.274135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.493 [2024-05-15 10:30:46.274146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.493 qpair failed and we were unable to recover it. 00:37:00.761 [2024-05-15 10:30:46.274785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.761 [2024-05-15 10:30:46.275512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.761 [2024-05-15 10:30:46.275542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.761 qpair failed and we were unable to recover it. 00:37:00.761 [2024-05-15 10:30:46.276080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.761 [2024-05-15 10:30:46.276705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.276734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 
00:37:00.762 [2024-05-15 10:30:46.277282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.277922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.277951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.278591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.279194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.279204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.279818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.280253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.280263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.280965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.281614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.281643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.282190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.282802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.282835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.283486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.284083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.284094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.284757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.285208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.285222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 
00:37:00.762 [2024-05-15 10:30:46.285769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.286322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.286331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.286778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.287310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.287318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.287824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.288349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.288358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.288905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.289460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.289468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.290042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.290530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.290538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.291085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.291614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.291643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.292178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.292699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.292708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 
00:37:00.762 [2024-05-15 10:30:46.293254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.293897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.293930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.294174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.294700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.294709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.295254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.295776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.295784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.296028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.296661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.296690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.297282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.297911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.297940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.298602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.299197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.299207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.299791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.300500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.300529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 
00:37:00.762 [2024-05-15 10:30:46.301081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.301713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.301742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.301986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.302541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.302550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.303116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.303787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.303817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.304492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.305084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.305097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.305209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.305732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.305742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.306279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.306846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.306854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.307533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.308124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.308133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 
00:37:00.762 [2024-05-15 10:30:46.308779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.309496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.309526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.310076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.310712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.310741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.311289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.311908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.311938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.312597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.313203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.313213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.313853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.314539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.314567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.315113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.315782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.315811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 00:37:00.762 [2024-05-15 10:30:46.316499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.317064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.762 [2024-05-15 10:30:46.317078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.762 qpair failed and we were unable to recover it. 
00:37:00.762 [2024-05-15 10:30:46.317572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.762 [2024-05-15 10:30:46.318173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.762 [2024-05-15 10:30:46.318183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:00.762 qpair failed and we were unable to recover it.
[... the same pattern — two posix_sock_create connect() failures with errno = 111, a nvme_tcp_qpair_connect_sock error for tqpair=0x7f9cbc000b90 against 10.0.0.2:4420, then "qpair failed and we were unable to recover it." — repeats continuously from 10:30:46.318 through 10:30:46.499; the intervening duplicate entries are omitted ...]
00:37:00.765 [2024-05-15 10:30:46.498494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.765 [2024-05-15 10:30:46.499071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:00.765 [2024-05-15 10:30:46.499081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:00.765 qpair failed and we were unable to recover it.
00:37:00.765 [2024-05-15 10:30:46.499720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.500185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.500195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.500827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.501484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.501513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.502080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.502736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.502765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.503498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.503967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.503977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.504605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.505207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.505217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.505758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.506286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.506298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.506815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.507288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.507304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 
00:37:00.765 [2024-05-15 10:30:46.507937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.508589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.508618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.509048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.509712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.509741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.510281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.510955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.510984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.511225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.511772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.511781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.512503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.513099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.513110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.513742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.514508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.514538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.514788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.515328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.515337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 
00:37:00.765 [2024-05-15 10:30:46.515867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.516394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.516402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.516958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.517530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.517538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.765 qpair failed and we were unable to recover it. 00:37:00.765 [2024-05-15 10:30:46.518074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.765 [2024-05-15 10:30:46.518664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.518693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.518935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.519488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.519497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.520065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.520637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.520666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.521207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.521712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.521721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.522259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.522865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.522895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 
00:37:00.766 [2024-05-15 10:30:46.523535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.524106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.524115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.524755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.525032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.525049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.525692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.526281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.526297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.526940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.527677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.527706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.528246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.528832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.528861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.529514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.530068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.530078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.530793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.531482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.531511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 
00:37:00.766 [2024-05-15 10:30:46.532054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.532720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.532749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.533288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.533918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.533947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.534610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.535127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.535137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.535777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.536300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.536311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.536960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.537237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.537254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.537925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.538614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.538643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.539212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.539832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.539861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 
00:37:00.766 [2024-05-15 10:30:46.540521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.541123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.541133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.541777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.542497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.542526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.542936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.543499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.543528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.544052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.544676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.544705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.545256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.545875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.545904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.546541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.547137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.547147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.547530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.548136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.548146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 
00:37:00.766 [2024-05-15 10:30:46.548829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.549530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.549559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.550105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.550624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.550653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.551189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.551697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:00.766 [2024-05-15 10:30:46.551726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:00.766 qpair failed and we were unable to recover it. 00:37:00.766 [2024-05-15 10:30:46.552264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.033 [2024-05-15 10:30:46.552886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.033 [2024-05-15 10:30:46.552916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.033 qpair failed and we were unable to recover it. 00:37:01.033 [2024-05-15 10:30:46.553574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.554126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.554137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.554789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.555481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.555510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.556058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.556726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.556755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 
00:37:01.034 [2024-05-15 10:30:46.557315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.557893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.557901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.558565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.559124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.559134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.559774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.560496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.560525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.561078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.561750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.561778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.562495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.563096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.563107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.563760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.564497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.564526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.564765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.565325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.565334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 
00:37:01.034 [2024-05-15 10:30:46.565788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.566351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.566359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.566915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.567409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.567418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.567983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.568428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.568437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.568989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.569557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.569566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.570102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.570488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.570517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.571066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.571712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.571742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.572303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.572734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.572742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 
00:37:01.034 [2024-05-15 10:30:46.573295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.573821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.573830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.574520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.575085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.575095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.575727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.576497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.576527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.577078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.577769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.577798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.034 [2024-05-15 10:30:46.578178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.578845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.034 [2024-05-15 10:30:46.578874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.034 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.579525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.580081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.580091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.580716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.581275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.581285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-05-15 10:30:46.581936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.582580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.582609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.583153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.583728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.583757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.584295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.584918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.584947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.585578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.586153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.586163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.586792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.587489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.587519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.588072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.588750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.588779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.589498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.590097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.590106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-05-15 10:30:46.590737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.591299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.591309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.591913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.592505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.592534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.593052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.593684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.593712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.594269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.594839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.594868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.595499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.596119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.596129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.596681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.597145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.597155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.597782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.598480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.598509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-05-15 10:30:46.599059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.599727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.599756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.600299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.600875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.600903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.601557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.602152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.602162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.602822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.603480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.603509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.604056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.604723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.604752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.605305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.605819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.605827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.606193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.606432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.606446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 
00:37:01.035 [2024-05-15 10:30:46.606991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.607569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.607576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.608120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.608737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.608766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.609313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.609827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.609835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.035 qpair failed and we were unable to recover it. 00:37:01.035 [2024-05-15 10:30:46.610257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.035 [2024-05-15 10:30:46.610789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.610797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.611251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.611826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.611855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.612532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.613136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.613146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.613790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.614495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.614524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-05-15 10:30:46.614768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.615276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.615284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.615836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.616284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.616295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.616900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.617590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.617618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.618156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.618778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.618807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.619474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.620079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.620089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.620730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.621503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.621532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.622076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.622738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.622767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-05-15 10:30:46.623512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.624073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.624083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.624735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.625506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.625535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.626082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.626709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.626738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.627268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.627881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.627910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.628552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.629150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.629160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.629856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.630236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.630245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.630889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.631573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.631602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-05-15 10:30:46.632140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.632714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.632743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.633279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.633941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.633970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.634628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.635190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.635200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.635863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.636296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.636308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.636872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.637566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.637594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.638145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.638808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.638837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.639085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.639721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.639750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 
00:37:01.036 [2024-05-15 10:30:46.640299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.640898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.640927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.036 [2024-05-15 10:30:46.641566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.642165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.036 [2024-05-15 10:30:46.642176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.036 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.642835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.643479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.643508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.644067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.644737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.644767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.645494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.646092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.646103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.646746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.647303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.647314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.647871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.648526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.648556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 
00:37:01.037 [2024-05-15 10:30:46.649117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.649329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.649345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.649766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.650339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.650350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.650908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.651432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.651440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.652021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.652549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.652557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.653122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.653379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.653394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.653816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.654392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.654400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.654743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.655296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.655303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 
00:37:01.037 [2024-05-15 10:30:46.655836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.656460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.656489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.657017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.657564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.657593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.658133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.658761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.658789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.659527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.660081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.660091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.660722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.661500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.661532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.662095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.662717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.662746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.663198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.663796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.663825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 
00:37:01.037 [2024-05-15 10:30:46.664501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.665097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.665107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.665759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.666483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.666512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.667073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.667742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.667771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.668501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.669099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.669109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.669751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.670504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.670533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.671083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.671704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.671733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.672299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.672798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.672826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 
00:37:01.037 [2024-05-15 10:30:46.673480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.674043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.674056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.674689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.675238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.675248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.037 [2024-05-15 10:30:46.675865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.676565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.037 [2024-05-15 10:30:46.676594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.037 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.676849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.677360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.677369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.677963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.678490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.678498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.679034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.679693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.679722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.680263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.680871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.680900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 
00:37:01.038 [2024-05-15 10:30:46.681564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.682117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.682127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.682673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.683275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.683285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.683944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.684628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.684656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.685201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.685831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.685863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.686525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.687102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.687112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.687589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.688182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.688192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.688830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.689530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.689559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 
00:37:01.038 [2024-05-15 10:30:46.689825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.690380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.690389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.691000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.691265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.691272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.691706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.692222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.692229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.692782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.693504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.693533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.694097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.694728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.694758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.695502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.696059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.696069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.696746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.697528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.697557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 
00:37:01.038 [2024-05-15 10:30:46.698106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.698752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.698781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.699498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.700063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.700073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.700742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.701515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.701544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.701749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.702251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.702259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.038 qpair failed and we were unable to recover it. 00:37:01.038 [2024-05-15 10:30:46.702810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.703490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.038 [2024-05-15 10:30:46.703520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.704066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.704589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.704619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.705023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.705650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.705679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 
00:37:01.039 [2024-05-15 10:30:46.706216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.706727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.706735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.707265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.707850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.707879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.708539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.709135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.709145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.709787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.710526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.710555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.711104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.711723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.711752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.712297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.712871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.712901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.713543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.714103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.714113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 
00:37:01.039 [2024-05-15 10:30:46.714750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.715498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.715527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.716076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.716751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.716780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.717188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.717823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.717852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.718530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.718948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.718959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.719583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.720178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.720188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.720740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.721265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.721272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.721910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.722159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.722175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 
00:37:01.039 [2024-05-15 10:30:46.722725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.723295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.723303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.723848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.724519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.724548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.725142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.725780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.725810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.726482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.727078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.727089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.727760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.728494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.728524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.729064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.729625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.729654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.730203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.730820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.730849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 
00:37:01.039 [2024-05-15 10:30:46.731532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.732132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.732142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.732796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.733241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.733253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.733915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.734623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.734652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.735053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.735725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.735754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.039 qpair failed and we were unable to recover it. 00:37:01.039 [2024-05-15 10:30:46.736314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.039 [2024-05-15 10:30:46.736851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.736860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.737566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.738121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.738131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.738643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.739124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.739134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 
00:37:01.040 [2024-05-15 10:30:46.739811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.740101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.740118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.740657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.741223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.741233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.741890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.742538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.742567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.743016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.743694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.743723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.744271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.744834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.744863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.745549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.746115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.746124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.746788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.747520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.747548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 
00:37:01.040 [2024-05-15 10:30:46.748087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.748681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.748710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.749256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.749696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.749726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.750276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.750927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.750957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.751618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.752216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.752226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.752822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.753097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.753113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.753846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.754501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.754531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.755080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.755592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.755621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 
00:37:01.040 [2024-05-15 10:30:46.756169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.756836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.756865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.757563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.758128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.758138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.758783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.759515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.759544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.760096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.760627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.760657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.760797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.761246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.761255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.761887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.762530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.762559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.763123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.763782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.763811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 
00:37:01.040 [2024-05-15 10:30:46.764489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.765043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.765053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.765644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.766196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.766206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.040 qpair failed and we were unable to recover it. 00:37:01.040 [2024-05-15 10:30:46.766753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.040 [2024-05-15 10:30:46.767294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.767303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.041 qpair failed and we were unable to recover it. 00:37:01.041 [2024-05-15 10:30:46.767915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.768611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.768640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.041 qpair failed and we were unable to recover it. 00:37:01.041 [2024-05-15 10:30:46.769186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.769833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.769862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.041 qpair failed and we were unable to recover it. 00:37:01.041 [2024-05-15 10:30:46.770524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.771079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.771089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.041 qpair failed and we were unable to recover it. 00:37:01.041 [2024-05-15 10:30:46.771727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.772169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.041 [2024-05-15 10:30:46.772181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.041 qpair failed and we were unable to recover it. 
00:37:01.313 [2024-05-15 10:30:46.942514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.943109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.943119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.943605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.943841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.943857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.944328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.944897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.944905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.945469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.945861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.945869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.946413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.946982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.946990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.947402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.947829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.947836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.948367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.948892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.948899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 
00:37:01.313 [2024-05-15 10:30:46.949463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.949990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.949997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.950544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.951075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.951082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.951567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.952121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.952131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.952775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.953507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.953536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.954197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.954845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.954874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.955582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.956150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.956161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.956849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.957545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.957574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 
00:37:01.313 [2024-05-15 10:30:46.958162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.958812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.958841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.959287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.959919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.959948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.960586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.961283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.961300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.962039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.962744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.962773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.313 qpair failed and we were unable to recover it. 00:37:01.313 [2024-05-15 10:30:46.963527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.313 [2024-05-15 10:30:46.964117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.964127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.964664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.965227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.965237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.965895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.966494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.966523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 
00:37:01.314 [2024-05-15 10:30:46.967077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.967700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.967729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.968299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.968966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.968995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.969525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.970080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.970090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.970755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.971526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.971555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.972106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.972776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.972805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.973509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.974084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.974093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.974746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.975513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.975541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 
00:37:01.314 [2024-05-15 10:30:46.976143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.976792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.976821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.977518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.977994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.978005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.978656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.979219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.979228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.979869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.980569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.980599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.981198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.981692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.981722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.982278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.982705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.982713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.983166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.983730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.983759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 
00:37:01.314 [2024-05-15 10:30:46.984217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.984820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.984849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.985274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.985890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.985919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.986567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.987149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.987159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.987906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.988586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.988615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.989164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.989753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.989783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.990515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.991118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.991128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 00:37:01.314 [2024-05-15 10:30:46.991761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.992515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.314 [2024-05-15 10:30:46.992545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.314 qpair failed and we were unable to recover it. 
00:37:01.315 [2024-05-15 10:30:46.993100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.993736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.993764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:46.994529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.995114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.995124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:46.995788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.996499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.996528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:46.997078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.997698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.997727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:46.998186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.998884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:46.998913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:46.999551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.000028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.000038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.000286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.000865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.000874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 
00:37:01.315 [2024-05-15 10:30:47.001569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.002179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.002189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.002708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.003254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.003262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.003800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.004534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.004563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.005093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.005765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.005794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.006530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.006888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.006898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.007528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.008079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.008089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.008622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.009219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.009229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 
00:37:01.315 [2024-05-15 10:30:47.009751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.010514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.010544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.011087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.011688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.011717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.012259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.012863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.012893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.013589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.014028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.014039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.014663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.015264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.015274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.015832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.016071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.016088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.016781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.017249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.017259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 
00:37:01.315 [2024-05-15 10:30:47.017693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.018281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.018296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.018850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.019540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.019569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.020116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.020713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.020742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.021173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.021749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.021778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.022482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.023050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.023060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.023694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.024286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.024306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.024878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.025535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.025564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 
00:37:01.315 [2024-05-15 10:30:47.026118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.026770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.026799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.315 qpair failed and we were unable to recover it. 00:37:01.315 [2024-05-15 10:30:47.027506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.315 [2024-05-15 10:30:47.028109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.028119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.028700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.029270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.029280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.029845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.030542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.030571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.030813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.031255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.031263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.031817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.032509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.032539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.032986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.033532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.033561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 
00:37:01.316 [2024-05-15 10:30:47.034110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.034732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.034761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.035514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.036077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.036090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.036602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.037206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.037216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.037859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.038576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.038605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.038851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.039394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.039403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.040023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.040594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.040601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.041168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.041718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.041747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 
00:37:01.316 [2024-05-15 10:30:47.041997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.042607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.042637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.043189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.043750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.043760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.044311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.044844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.044852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.045415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.045949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.045956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.046522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.046978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.046989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.047624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.048176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.048186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.048727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.049263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.049271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 
00:37:01.316 [2024-05-15 10:30:47.049776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.050514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.050544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.051168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.051778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.051807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.052529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.053139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.053150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.053740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.054180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.054190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.054849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.055128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.055145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.055691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.056247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.056257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.056920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.057659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.057688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 
00:37:01.316 [2024-05-15 10:30:47.058305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.058918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.058950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.059486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.060056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.060066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.060814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.061527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.316 [2024-05-15 10:30:47.061556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.316 qpair failed and we were unable to recover it. 00:37:01.316 [2024-05-15 10:30:47.062108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.062762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.062792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.063513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.064099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.064110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.064767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.065510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.065540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.066166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.066797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.066826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 
00:37:01.317 [2024-05-15 10:30:47.067524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.067968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.067979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.068513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.069116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.069127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.069675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.070199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.070209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.070849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.071531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.071560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.072106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.072699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.072728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.073171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.073817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.073847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.074578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.075186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.075196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 
00:37:01.317 [2024-05-15 10:30:47.075582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.076182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.076193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.076658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.077229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.077237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.077800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.078508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.078538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.079072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.079280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.079305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.079864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.080488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.080518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.081086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.081689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.081718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.082247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.082817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.082847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 
00:37:01.317 [2024-05-15 10:30:47.083487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.084046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.084056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.084610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.085216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.085226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.085870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.086503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.086533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.087078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.087751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.087780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.088314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.088921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.088929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.089585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.090182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.090192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.090429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.090994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.091003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 
00:37:01.317 [2024-05-15 10:30:47.091259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.091631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.091641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.092197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.092760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.092768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.093516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.094083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.094093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.094675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.095228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.095240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.317 [2024-05-15 10:30:47.095893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.096499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.317 [2024-05-15 10:30:47.096529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.317 qpair failed and we were unable to recover it. 00:37:01.318 [2024-05-15 10:30:47.097074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.318 [2024-05-15 10:30:47.097655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.318 [2024-05-15 10:30:47.097690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.318 qpair failed and we were unable to recover it. 00:37:01.318 [2024-05-15 10:30:47.098133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.318 [2024-05-15 10:30:47.098657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.318 [2024-05-15 10:30:47.098687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.318 qpair failed and we were unable to recover it. 
00:37:01.318 [2024-05-15 10:30:47.099257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.099930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.099959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.585 qpair failed and we were unable to recover it. 00:37:01.585 [2024-05-15 10:30:47.100687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.101268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.101278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.585 qpair failed and we were unable to recover it. 00:37:01.585 [2024-05-15 10:30:47.101946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.102219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.102235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.585 qpair failed and we were unable to recover it. 00:37:01.585 [2024-05-15 10:30:47.102875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.103573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.103602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.585 qpair failed and we were unable to recover it. 00:37:01.585 [2024-05-15 10:30:47.104159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.104765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.585 [2024-05-15 10:30:47.104795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.105241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.105911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.105941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.106608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.107163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.107173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 
00:37:01.586 [2024-05-15 10:30:47.107753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.108514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.108543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.109100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.109829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.109859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.110524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.111130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.111140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.111883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.112499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.112528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.113084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.113634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.113662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.114204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.114737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.114766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.115497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.116106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.116115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 
00:37:01.586 [2024-05-15 10:30:47.116774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.117513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.117542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.118091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.118650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.118679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.119239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.119883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.119911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.120511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.121066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.121076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.121697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.122298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.122309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.122850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.123536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.123565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.124096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.124693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.124723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 
00:37:01.586 [2024-05-15 10:30:47.125092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.125631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.125660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.126221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.126839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.126868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.127515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.128087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.128097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.128600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.129201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.129212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.129848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.130579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.130608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.131153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.131681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.131710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.132147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.132681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.132710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 
00:37:01.586 [2024-05-15 10:30:47.133271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.133945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.133974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.586 qpair failed and we were unable to recover it. 00:37:01.586 [2024-05-15 10:30:47.134629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.586 [2024-05-15 10:30:47.135193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.135203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.135669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.136271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.136281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.136842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.137521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.137550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.138139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.138349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.138363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.138952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.139609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.139638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.140208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.140744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.140773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 
00:37:01.587 [2024-05-15 10:30:47.141520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.142122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.142132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.142801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.143277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.143286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.143927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.144653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.144682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.145234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.145914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.145943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.146584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.147140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.147150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.147725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.148303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.148314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.149051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.149731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.149760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 
00:37:01.587 [2024-05-15 10:30:47.150513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.151077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.151087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.151678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.152110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.152121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.152767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.153248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.153258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.153786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.154513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.154542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.155113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.155718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.155746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.156296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.156945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.156974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.157608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.158101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.158112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 
00:37:01.587 [2024-05-15 10:30:47.158814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.159514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.159543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.160109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.160726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.160755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.161005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.161640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.161669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.162223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.162834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.162863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.163501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.164073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.164083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.164722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.165261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.165271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.165917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.166213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.166223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 
00:37:01.587 [2024-05-15 10:30:47.166911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.167621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.167650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.168264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.168904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.168934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.587 [2024-05-15 10:30:47.169599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.170184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.587 [2024-05-15 10:30:47.170194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.587 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.170879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.171569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.171598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.172131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.172791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.172819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.173519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.174114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.174124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.174772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.175483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.175513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 
00:37:01.588 [2024-05-15 10:30:47.176067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.176734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.176763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.177514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.178102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.178112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.178699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.179520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.179549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.180115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.180820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.180850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.181217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.181770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.181800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.182513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.183068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.183077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.183587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.184151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.184161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 
00:37:01.588 [2024-05-15 10:30:47.184810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.185276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.185286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.185930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.186624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.186653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.186896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.187573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.187602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.188153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.188860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.188889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.189530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.190150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.190160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.190689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.191258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.191268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.191920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.192601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.192629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 
00:37:01.588 [2024-05-15 10:30:47.193161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.193826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.193856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.194530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.195137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.195147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.195800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.196041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.196059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.196605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.197004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.197013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.197680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.198301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.198312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.198878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.199540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.199570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.200096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.200626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.200655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 
00:37:01.588 [2024-05-15 10:30:47.201215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.201878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.201907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.202523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.203077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.203088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.203656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.204229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.204239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.204883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.205575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.205604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.588 qpair failed and we were unable to recover it. 00:37:01.588 [2024-05-15 10:30:47.206152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.206720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.588 [2024-05-15 10:30:47.206749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.207288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.207926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.207954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.208601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.209182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.209192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 
00:37:01.589 [2024-05-15 10:30:47.209959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.210667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.210696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.211248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.211931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.211960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.212627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.213200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.213210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.214016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.214673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.214702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.215254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.215911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.215941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.216604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.217182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.217194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.217852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.218556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.218585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 
00:37:01.589 [2024-05-15 10:30:47.218942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.219633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.219662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.220220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.220640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.220648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.221199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.221417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.221431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.222019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.222644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.222673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.223287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.223835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.223843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.224509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.225066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.225076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 00:37:01.589 [2024-05-15 10:30:47.225748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.226516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.589 [2024-05-15 10:30:47.226545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.589 qpair failed and we were unable to recover it. 
[... the same three-message cycle — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every remaining connection attempt, from [2024-05-15 10:30:47.227099] through [2024-05-15 10:30:47.404134] (console timestamps 00:37:01.589–00:37:01.862), always with errno = 111 ...]
00:37:01.862 [2024-05-15 10:30:47.404742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.405226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.405237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.862 qpair failed and we were unable to recover it. 00:37:01.862 [2024-05-15 10:30:47.405751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.406516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.406545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.862 qpair failed and we were unable to recover it. 00:37:01.862 [2024-05-15 10:30:47.407098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.407664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.407693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.862 qpair failed and we were unable to recover it. 00:37:01.862 [2024-05-15 10:30:47.408236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.408861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.862 [2024-05-15 10:30:47.408891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.409602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.410213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.410223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.410798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.411523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.411556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.411991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.412570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.412600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 
00:37:01.863 [2024-05-15 10:30:47.413033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.413676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.413706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.414260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.414629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.414658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.415101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.415713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.415743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.416488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.417041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.417051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.417695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.418303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.418314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.418897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.419524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.419554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.420123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.420763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.420794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 
00:37:01.863 [2024-05-15 10:30:47.421228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.421881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.421911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.422582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.423188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.423201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.423753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.424514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.424543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.425084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.425706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.425735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.426280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.426964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.426993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.427682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.428154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.428164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.428808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.429491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.429521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 
00:37:01.863 [2024-05-15 10:30:47.430065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.430655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.430684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.431252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.431585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.431614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.432189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.432629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.432638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.433187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.433752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.433760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.863 [2024-05-15 10:30:47.434313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.434553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.863 [2024-05-15 10:30:47.434570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.863 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.435112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.435556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.435564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.436002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.436619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.436649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 
00:37:01.864 [2024-05-15 10:30:47.437207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.437802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.437810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.438516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.438997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.439007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.439685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.440243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.440253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.440921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.441589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.441619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.442179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.442714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.442744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.443303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.443833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.443841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.444279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.444428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.444444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 
00:37:01.864 [2024-05-15 10:30:47.444923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.445325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.445337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.445908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.446489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.446497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.446931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.447395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.447403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.447950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.448498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.448505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.449078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.449710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.449740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.450522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.450996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.451006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.451639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.452221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.452231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 
00:37:01.864 [2024-05-15 10:30:47.452897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.453677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.453706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.454512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.455071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.455081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.455757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.456203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.456212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.456764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.457233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.457243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.457769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.458508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.458538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.458988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.459617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.459646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 00:37:01.864 [2024-05-15 10:30:47.460198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.460740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.460748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.864 qpair failed and we were unable to recover it. 
00:37:01.864 [2024-05-15 10:30:47.461306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.461761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.864 [2024-05-15 10:30:47.461768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.462314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.462751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.462759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.463284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.463870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.463878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.464232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.464831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.464838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.465499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.466122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.466132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.466876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.467572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.467602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.468139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.468741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.468770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 
00:37:01.865 [2024-05-15 10:30:47.469521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.469959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.469969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.470636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.471207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.471217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.471876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.472522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.472552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.473120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.473742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.473771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.474515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.475118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.475128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.475795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.476511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.476540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.477089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.477775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.477805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 
00:37:01.865 [2024-05-15 10:30:47.478523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.479012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.479022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.479553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.480104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.480115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.480254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.480820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.480829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.481385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.481966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.481974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.482632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.483220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.483230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.483953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.484604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.484633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.485187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.485641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.485671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 
00:37:01.865 [2024-05-15 10:30:47.486056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.486712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.486742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.487273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.487861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.487890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.488624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.489232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.489242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.489809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.490536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.490566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.865 [2024-05-15 10:30:47.491199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.491754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.865 [2024-05-15 10:30:47.491763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.865 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.492224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.492839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.492869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.493541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.494117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.494127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 
00:37:01.866 [2024-05-15 10:30:47.494788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.495498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.495528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.496078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.496706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.496736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.497284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.497942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.497972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.498563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.499005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.499015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.499651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.500253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.500263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.500914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.501600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.501630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.502087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.502728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.502757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 
00:37:01.866 [2024-05-15 10:30:47.503314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.503680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.503688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.503925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.504369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.504378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.504955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.505422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.505430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.505997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.506407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.506415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.507013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.507547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.507555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.508100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.508616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.508645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.509199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.509745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.509753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 
00:37:01.866 [2024-05-15 10:30:47.510183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.510840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.510869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.511525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.512123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.512133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.512799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.513524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.513560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.513934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.514588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.514617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.515178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.515841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.515849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.516562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.517114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.517124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 00:37:01.866 [2024-05-15 10:30:47.517768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.518486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.518515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.866 qpair failed and we were unable to recover it. 
00:37:01.866 [2024-05-15 10:30:47.519056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.866 [2024-05-15 10:30:47.519629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.519658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.520219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.520842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.520871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.521528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.522078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.522088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.522735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.523170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.523180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.523792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.524235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.524245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.524715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.525293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.525301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.525920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.526564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.526593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 
00:37:01.867 [2024-05-15 10:30:47.527152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.527773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.527802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.528529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.529127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.529137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.529599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.529867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.529882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.530433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.530804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.530811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.531358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.531886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.531894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.532307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.532833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.532841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.533272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.533805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.533813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 
00:37:01.867 [2024-05-15 10:30:47.534500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.535103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.535113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.867 qpair failed and we were unable to recover it. 00:37:01.867 [2024-05-15 10:30:47.535755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.536313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.867 [2024-05-15 10:30:47.536332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.536871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.537533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.537562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.537993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.538524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.538532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.539073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.539718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.539747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.540297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.540636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.540664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.541212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.541730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.541738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 
00:37:01.868 [2024-05-15 10:30:47.542516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.543122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.543132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.543764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.544498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.544527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.545078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.545745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.545774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.546492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.547050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.547060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.547719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.548456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.548485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.549034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.549530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.549559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3091594 Killed "${NVMF_APP[@]}" "$@" 00:37:01.868 [2024-05-15 10:30:47.550131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:01.868 [2024-05-15 10:30:47.550785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.550818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 
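
The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: line 36 of target_disconnect.sh has just killed the nvmf target process, so every host-side connect() to 10.0.0.2 port 4420 is refused until a new target starts listening, and the initiator logs "qpair failed and we were unable to recover it" for each attempt. The plain-POSIX sketch below is illustrative only (it is not SPDK's posix_sock_create()); it simply shows a connect() attempt to that address and port being refused and retried as a transient failure.

/*
 * Illustrative only: what errno 111 (ECONNREFUSED) in the log above means.
 * This is NOT SPDK code; it is a bare connect() to 10.0.0.2:4420 with a
 * bounded retry, treating "connection refused" as transient while the
 * target is down.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt + 1);
            close(fd);
            return 0;
        }
        /* errno 111 == ECONNREFUSED: nothing is listening on the port yet */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        sleep(1);                                 /* back off before retrying */
    }
    return 1;
}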
00:37:01.868 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:01.868 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:01.868 [2024-05-15 10:30:47.551481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:01.868 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.868 [2024-05-15 10:30:47.552082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.552093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.552631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.553109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.553120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.553762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.554246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.554255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.554904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.555596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.555625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.556173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.556817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.556846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 00:37:01.868 [2024-05-15 10:30:47.557520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.558118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 [2024-05-15 10:30:47.558128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.868 qpair failed and we were unable to recover it. 
00:37:01.868 [2024-05-15 10:30:47.558778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.868 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3092628 00:37:01.868 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3092628 00:37:01.868 [2024-05-15 10:30:47.559504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.559533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:01.869 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # '[' -z 3092628 ']' 00:37:01.869 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:01.869 [2024-05-15 10:30:47.560077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:01.869 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:01.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:01.869 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:01.869 [2024-05-15 10:30:47.560726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 10:30:47 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:01.869 [2024-05-15 10:30:47.560756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.561208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.561658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.561687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.562255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.562892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.562922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 
00:37:01.869 [2024-05-15 10:30:47.563621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.563889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.563907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.564417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.564993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.565001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.565631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.566240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.566251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.566809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.567523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.567552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.568145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.568779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.568809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.569199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.569834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.569864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.570620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.571219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.571229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 
00:37:01.869 [2024-05-15 10:30:47.571932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.572629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.572658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.573206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.573706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.573735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.574285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.574960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.574990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.575634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.576239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.576250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.576892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.577594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.577623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.578178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.578834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.578863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.579521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.580076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.580086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 
00:37:01.869 [2024-05-15 10:30:47.580757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.581506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.581536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.582071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.582698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.582727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.869 [2024-05-15 10:30:47.583500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.583880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.869 [2024-05-15 10:30:47.583891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.869 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.584560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.585113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.585123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.585624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.586229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.586240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.586782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.587508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.587537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.588088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.588740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.588770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 
00:37:01.870 [2024-05-15 10:30:47.589529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.590124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.590134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.590885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.591613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.591642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.592214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.592862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.592891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.593559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.594086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.594097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.594756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.595197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.595208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.595870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.596612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.596642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.597077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.597736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.597765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 
00:37:01.870 [2024-05-15 10:30:47.598511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.599119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.599129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.599668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.600280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.600302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.600830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.601254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.601264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.601912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.602569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.602599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.603014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.603685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.603714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.604272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.604972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.605002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.605664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.606133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.606143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 
00:37:01.870 [2024-05-15 10:30:47.606547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.607156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.607165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.607835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.608561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.608590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.609149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.609282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.609295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.609921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.610604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.610633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.610631] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:01.870 [2024-05-15 10:30:47.610674] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:01.870 [2024-05-15 10:30:47.611206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.611848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.611879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 00:37:01.870 [2024-05-15 10:30:47.612534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.613101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.613112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.870 qpair failed and we were unable to recover it. 
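
The "[ DPDK EAL parameters: ... ]" line above records the argument vector SPDK builds for DPDK when the replacement nvmf_tgt (started with -m 0xF0, i.e. cores 4-7) initializes. The sketch below is a minimal, hedged illustration of that hand-off, assuming DPDK 23.11 development headers are installed (build against pkg-config libdpdk); the argument list is a subset of the one shown in the log and is not SPDK's actual initialization code.

/*
 * Minimal sketch (not SPDK code) of how an EAL argument vector like the one
 * logged above is consumed: the arguments are handed to rte_eal_init().
 */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                     /* program name slot */
        "-c", "0xF0",               /* coremask: cores 4-7, matching -m 0xF0 */
        "--no-telemetry",
        "--file-prefix=spdk0",      /* hugepage/file namespace for this process */
        "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() returns the number of parsed arguments on success and
     * -1 on failure (for example when no usable hugepages are available). */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        printf("rte_eal_init failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}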
00:37:01.870 [2024-05-15 10:30:47.613772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.870 [2024-05-15 10:30:47.614214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.614225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.614902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.615614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.615643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.616217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.616626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.616655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.617207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.617816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.617827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.618072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.618750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.618779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.619190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.619698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.619706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.620279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.620791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.620821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 
00:37:01.871 [2024-05-15 10:30:47.621492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.622110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.622120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.622815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.623265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.623276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.623952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.624664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.624694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.625268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.625914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.625944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.626624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.627237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.627247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.627903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.628604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.628633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.629195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.629720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.629752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 
00:37:01.871 [2024-05-15 10:30:47.630460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.631087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.631097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.631481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.632062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.632072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.632746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.633193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.633204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.633860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.634165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.634175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.634718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.635257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.635265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.635901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.636605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.636634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.637240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.637871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.637900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 
00:37:01.871 [2024-05-15 10:30:47.638553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.638872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.638881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.639513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.640122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.640132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.640779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.641489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.641524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.642102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.642795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.642824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 EAL: No free 2048 kB hugepages reported on node 1 00:37:01.871 [2024-05-15 10:30:47.643492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.644056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.644065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.644745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.645188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.645198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 00:37:01.871 [2024-05-15 10:30:47.645852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.646657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.646688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.871 qpair failed and we were unable to recover it. 
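
The "EAL: No free 2048 kB hugepages reported on node 1" line above means NUMA node 1 has no reserved 2 MiB hugepages for this process to draw on. The sketch below only illustrates that condition and is not DPDK or SPDK code: a single MAP_HUGETLB mapping fails with ENOMEM when no hugepages have been reserved (for example via /proc/sys/vm/nr_hugepages).

/*
 * Illustrative sketch: attempt to map one 2048 kB hugepage directly.
 * With no hugepages reserved on the system, mmap() fails with ENOMEM,
 * which is the situation the EAL warning above reports for node 1.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 2 * 1024 * 1024;  /* one 2048 kB hugepage */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        /* ENOMEM here usually means no free hugepages are reserved */
        printf("mmap(MAP_HUGETLB) failed: %s\n", strerror(errno));
        return 1;
    }
    printf("got a 2 MiB hugepage at %p\n", p);
    munmap(p, len);
    return 0;
}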
00:37:01.871 [2024-05-15 10:30:47.646940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.647555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.871 [2024-05-15 10:30:47.647585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.872 qpair failed and we were unable to recover it. 00:37:01.872 [2024-05-15 10:30:47.648143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.872 [2024-05-15 10:30:47.648784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.872 [2024-05-15 10:30:47.648813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.872 qpair failed and we were unable to recover it. 00:37:01.872 [2024-05-15 10:30:47.648941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.872 [2024-05-15 10:30:47.649494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:01.872 [2024-05-15 10:30:47.649503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:01.872 qpair failed and we were unable to recover it. 00:37:01.872 [2024-05-15 10:30:47.649891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.650475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.650484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.651102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.651671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.651701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.652257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.652790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.652823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.653490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.653930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.653940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 
00:37:02.140 [2024-05-15 10:30:47.654589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.655151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.655161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.655815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.656257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.656268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.656892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.657599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.657628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.658203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.658844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.658874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.659557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.659840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.140 [2024-05-15 10:30:47.659858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.140 qpair failed and we were unable to recover it. 00:37:02.140 [2024-05-15 10:30:47.660418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.141 [2024-05-15 10:30:47.660968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.141 [2024-05-15 10:30:47.660975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.141 qpair failed and we were unable to recover it. 00:37:02.141 [2024-05-15 10:30:47.661521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.141 [2024-05-15 10:30:47.662059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.141 [2024-05-15 10:30:47.662066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.141 qpair failed and we were unable to recover it. 
00:37:02.141 [2024-05-15 10:30:47.662688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.141 [2024-05-15 10:30:47.663255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.141 [2024-05-15 10:30:47.663265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:02.141 qpair failed and we were unable to recover it.
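errno = 111 is ECONNREFUSED: the initiator reaches 10.0.0.2 but nothing is accepting on TCP port 4420 yet, so every connect() is refused and each NVMe/TCP qpair setup on tqpair 0x7f9cbc000b90 fails with "qpair failed and we were unable to recover it." The app.c/reactor.c NOTICE lines just below suggest the nvmf target application was still starting while these retries ran. A minimal shell probe of the same endpoint (an illustrative check, not part of this test run; the address and port are taken from the log):

  $ bash -c ': </dev/tcp/10.0.0.2/4420' || echo "connect refused (errno 111 = ECONNREFUSED)"   # fails while nothing listens on 4420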
00:37:02.142 [2024-05-15 10:30:47.693556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:02.143 [2024-05-15 10:30:47.725302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:02.143 [2024-05-15 10:30:47.725326] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:02.143 [2024-05-15 10:30:47.725335] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:02.143 [2024-05-15 10:30:47.725341] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:02.143 [2024-05-15 10:30:47.725347] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:02.143 [2024-05-15 10:30:47.725507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:37:02.143 [2024-05-15 10:30:47.725726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:37:02.143 [2024-05-15 10:30:47.725848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:37:02.143 [2024-05-15 10:30:47.725849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
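The app_setup_trace NOTICE above spells out how to capture the tracepoints it just enabled; a sketch of that, using exactly the names the log prints (instance id 0 and /dev/shm/nvmf_trace.0; the /tmp destination is an arbitrary choice):

  $ spdk_trace -s nvmf -i 0          # snapshot runtime events of app instance 0, per the NOTICE
  $ cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shm file for offline analysis/debug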
00:37:02.143 [2024-05-15 10:30:47.728250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.143 [2024-05-15 10:30:47.728790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.143 [2024-05-15 10:30:47.728819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:02.143 qpair failed and we were unable to recover it.
00:37:02.146 [2024-05-15 10:30:47.826917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.146 [2024-05-15 10:30:47.827164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.146 [2024-05-15 10:30:47.827178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.146 qpair failed and we were unable to recover it. 00:37:02.146 [2024-05-15 10:30:47.827745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.146 [2024-05-15 10:30:47.828273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.146 [2024-05-15 10:30:47.828280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.146 qpair failed and we were unable to recover it. 00:37:02.146 [2024-05-15 10:30:47.828821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.146 [2024-05-15 10:30:47.829480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.146 [2024-05-15 10:30:47.829509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.146 qpair failed and we were unable to recover it. 00:37:02.146 [2024-05-15 10:30:47.830057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.146 [2024-05-15 10:30:47.830690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.830720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.831301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.831951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.831980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.832241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.832839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.832868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.833249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.833740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.833769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 
00:37:02.147 [2024-05-15 10:30:47.834503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.835110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.835121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.835801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.836541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.836570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.837024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.837696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.837725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.838281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.838953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.838982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.839616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.840216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.840226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.840868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.841545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.841574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.842173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.842580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.842609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 
00:37:02.147 [2024-05-15 10:30:47.843158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.843741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.843770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.844477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.845079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.845090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.845251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.845800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.845808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.846467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.847071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.847081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.847712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.848498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.848527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.848774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.849331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.849339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.849921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.850493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.850501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 
00:37:02.147 [2024-05-15 10:30:47.851039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.851562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.851591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.852141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.852787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.852815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.853488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.854090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.854100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.854767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.855472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.855501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.855925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.856533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.856562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.857113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.857786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.857816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.858487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.858785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.858795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 
00:37:02.147 [2024-05-15 10:30:47.859237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.859779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.859787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.860505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.861062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.861072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.861718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.862158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.862168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.862834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.863558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.863587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.147 qpair failed and we were unable to recover it. 00:37:02.147 [2024-05-15 10:30:47.863854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.147 [2024-05-15 10:30:47.864063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.864078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.864493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.864799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.864808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.865020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.865275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.865284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 
00:37:02.148 [2024-05-15 10:30:47.865814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.866341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.866349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.866776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.867316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.867323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.867780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.868306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.868314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.868831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.869419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.869427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.869969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.870238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.870246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.870805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.871499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.871529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.872022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.872684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.872713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 
00:37:02.148 [2024-05-15 10:30:47.873258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.873483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.873491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.874048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.874634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.874663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.874958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.875456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.875464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.876053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.876676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.876705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.877260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.877908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.877937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.878584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.879191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.879201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.879830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.880070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.880086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 
00:37:02.148 [2024-05-15 10:30:47.880629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.881215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.881224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.881760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.882501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.882530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.883072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.883586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.883616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.884140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.884772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.884801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.885065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.885484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.885513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.885849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.886115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.886123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.886662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.887135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.887142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 
00:37:02.148 [2024-05-15 10:30:47.887650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.888255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.888265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.888987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.889493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.889523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.890167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.890585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.890613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.891162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.891778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.891807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.892482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.893050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.893060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.148 qpair failed and we were unable to recover it. 00:37:02.148 [2024-05-15 10:30:47.893698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.148 [2024-05-15 10:30:47.894299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.894309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.894917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.895578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.895607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 
00:37:02.149 [2024-05-15 10:30:47.896162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.896819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.896848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.897537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.898093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.898103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.898509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.898828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.898839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.899127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.899792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.899821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.900111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.900788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.900817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.901063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.901663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.901695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.902247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.902830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.902838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 
00:37:02.149 [2024-05-15 10:30:47.903498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.904097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.904107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.904747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.905004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.905014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.905571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.906122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.906131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.906781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.907499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.907528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.908087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.908770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.908799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.909208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.909834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.909863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.910534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.911134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.911143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 
00:37:02.149 [2024-05-15 10:30:47.911790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.912081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.912091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.912599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.912928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.912941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.913238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.913827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.913835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.914093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.914678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.914707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.915251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.915760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.915789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.916512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.917113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.917122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.917631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.918235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.918245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 
00:37:02.149 [2024-05-15 10:30:47.918751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.919506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.919535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.919916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.920178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.920186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.920764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.921544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.921573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.921986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.922569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.922578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.923031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.923697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.149 [2024-05-15 10:30:47.923730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.149 qpair failed and we were unable to recover it. 00:37:02.149 [2024-05-15 10:30:47.923903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.924418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.924426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.150 qpair failed and we were unable to recover it. 00:37:02.150 [2024-05-15 10:30:47.924690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.925245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.925252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.150 qpair failed and we were unable to recover it. 
00:37:02.150 [2024-05-15 10:30:47.925825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.926389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.926397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.150 qpair failed and we were unable to recover it. 00:37:02.150 [2024-05-15 10:30:47.926883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.927456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.927464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.150 qpair failed and we were unable to recover it. 00:37:02.150 [2024-05-15 10:30:47.928013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.928640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.150 [2024-05-15 10:30:47.928669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.150 qpair failed and we were unable to recover it. 00:37:02.418 [2024-05-15 10:30:47.929214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.929760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.929768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.418 qpair failed and we were unable to recover it. 00:37:02.418 [2024-05-15 10:30:47.930497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.931101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.931111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.418 qpair failed and we were unable to recover it. 00:37:02.418 [2024-05-15 10:30:47.931515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.931704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.931713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.418 qpair failed and we were unable to recover it. 00:37:02.418 [2024-05-15 10:30:47.932276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.932548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.418 [2024-05-15 10:30:47.932557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.418 qpair failed and we were unable to recover it. 
00:37:02.418 [2024-05-15 10:30:47.933103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.418 [2024-05-15 10:30:47.933747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.418 [2024-05-15 10:30:47.933779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:02.418 qpair failed and we were unable to recover it.
[... the four-line pattern above repeats without interruption, with fresh microsecond timestamps, from 10:30:47.934 through 10:30:48.101; every attempt targets the same tqpair (0x7f9cbc000b90) at addr=10.0.0.2, port=4420, and every connect() fails with errno = 111 ...]
00:37:02.424 [2024-05-15 10:30:48.101584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.102111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.102119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.102755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.103523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.103551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.104116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.104782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.104812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.105198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.105844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.105873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.106289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.106790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.106819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.107208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.107843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.107872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.108530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.109087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.109097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 
00:37:02.424 [2024-05-15 10:30:48.109747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.110512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.110544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.111125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.111785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.111814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.112225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.112861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.112890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.113183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.113872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.113901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.114543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.115011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.115021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.115564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.116123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.116132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.116805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.117505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.117534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 
00:37:02.424 [2024-05-15 10:30:48.118102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.118769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.118798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.119529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.120132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.120143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.120802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.121486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.121515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.121813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.122119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.122127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.122499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.123101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.123111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.123760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.124490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.124519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.125087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.125757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.125786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 
00:37:02.424 [2024-05-15 10:30:48.126481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.127055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.127065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.127560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.128130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.128140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.128795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.129523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.129552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.130134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.130797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.130826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.131505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.132061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.132071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.132708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.133501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.424 [2024-05-15 10:30:48.133530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.424 qpair failed and we were unable to recover it. 00:37:02.424 [2024-05-15 10:30:48.134024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.134680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.134709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 
00:37:02.425 [2024-05-15 10:30:48.135280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.135926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.135955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.136593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.136884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.136895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.137486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.137786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.137796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.138346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.138881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.138888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.139453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.140025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.140032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.140580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.141031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.141039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.141716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.142508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.142537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 
00:37:02.425 [2024-05-15 10:30:48.143089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.143729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.143758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.144196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.144423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.144432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.144949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.145576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.145606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.146165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.146840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.146869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.147524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.147940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.147951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.148200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.148404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.148419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.148977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.149515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.149523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 
00:37:02.425 [2024-05-15 10:30:48.150075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.150699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.150728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.151277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.151915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.151944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.152608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.152885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.152903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.153456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.153994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.154001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.154629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.155231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.155241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.155785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.156491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.156520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.157091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.157765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.157793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 
00:37:02.425 [2024-05-15 10:30:48.158505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.159071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.159081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.159517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.160115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.160125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.160496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.160789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.160799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.161372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.161954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.161961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.162507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.162810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.162818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.163082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.163581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.163588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 00:37:02.425 [2024-05-15 10:30:48.164174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.164682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.164711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.425 qpair failed and we were unable to recover it. 
00:37:02.425 [2024-05-15 10:30:48.165092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.165789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.425 [2024-05-15 10:30:48.165818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.166484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.167089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.167099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.167746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.168307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.168318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.168786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.169111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.169119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.169667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.170270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.170280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.170916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.171573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.171602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.172158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.172813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.172842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 
00:37:02.426 [2024-05-15 10:30:48.173489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.174092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.174101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.174781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.175522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.175551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.175842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.176149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.176156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.176373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.176676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.176685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.176906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.177169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.177177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.177753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.178290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.178301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.178860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.179561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.179591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 
00:37:02.426 [2024-05-15 10:30:48.180182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.180728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.180736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.180998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.181497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.181526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.182102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.182773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.182802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.183041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.183599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.183608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.184160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.184820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.184851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.185511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.186077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.186086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.186754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.187219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.187229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 
00:37:02.426 [2024-05-15 10:30:48.187612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.188223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.188232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.188898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.189234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.189243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.189660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.190262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.190271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.190898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.191577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.191605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.192151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.192851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.192879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.193524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.194092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.194102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.194743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.195473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.195501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 
00:37:02.426 [2024-05-15 10:30:48.195878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.196530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.196558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.196833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.197453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.197462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.426 qpair failed and we were unable to recover it. 00:37:02.426 [2024-05-15 10:30:48.197865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.426 [2024-05-15 10:30:48.198397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.198403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.198832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.199366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.199373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.199842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.200380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.200387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.200943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.201243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.201250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.201523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.202073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.202080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 
00:37:02.427 [2024-05-15 10:30:48.202471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.202911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.202918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.203461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.203991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.203997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.204604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.205224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.205233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.205782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.206514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.427 [2024-05-15 10:30:48.206542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.427 qpair failed and we were unable to recover it. 00:37:02.427 [2024-05-15 10:30:48.206956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.695 [2024-05-15 10:30:48.207588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.695 [2024-05-15 10:30:48.207617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.695 qpair failed and we were unable to recover it. 00:37:02.695 [2024-05-15 10:30:48.208028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.695 [2024-05-15 10:30:48.208602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.695 [2024-05-15 10:30:48.208631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.695 qpair failed and we were unable to recover it. 00:37:02.695 [2024-05-15 10:30:48.209173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.695 [2024-05-15 10:30:48.209566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.695 [2024-05-15 10:30:48.209593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.695 qpair failed and we were unable to recover it. 
00:37:02.695 [2024-05-15 10:30:48.210133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.695 [2024-05-15 10:30:48.210519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:02.695 [2024-05-15 10:30:48.210546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420
00:37:02.695 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats back to back for every reconnect attempt from 10:30:48.210 through 10:30:48.377 ...]
00:37:02.700 [2024-05-15 10:30:48.377719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.377991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.378007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 [2024-05-15 10:30:48.378568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.379105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.379112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 [2024-05-15 10:30:48.379810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.380542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.380570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 [2024-05-15 10:30:48.381142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.381586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.381612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 [2024-05-15 10:30:48.382207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.382729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.382757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 [2024-05-15 10:30:48.383210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.383656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.383663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 [2024-05-15 10:30:48.384201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.384493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.384499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 
00:37:02.700 [2024-05-15 10:30:48.385038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.385710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.385738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 [2024-05-15 10:30:48.386275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.386930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.386958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:02.700 [2024-05-15 10:30:48.387600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # return 0 00:37:02.700 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:02.700 [2024-05-15 10:30:48.388176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.700 [2024-05-15 10:30:48.388186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.700 qpair failed and we were unable to recover it. 00:37:02.700 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:02.701 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:02.701 [2024-05-15 10:30:48.388909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.389236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.389246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.389904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.390509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.390536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.391083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.391742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.391770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 
00:37:02.701 [2024-05-15 10:30:48.392491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.393059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.393069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.393742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.394517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.394545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.394980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.395642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.395671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.396304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.396534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.396547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.397132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.397801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.397829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.398492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.399070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.399079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.399748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.400191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.400200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 
00:37:02.701 [2024-05-15 10:30:48.400841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.401496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.401524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.401823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.402281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.402288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.402867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.403541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.403569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.404114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.404651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.404679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.405313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.405915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.405923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.406568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.407135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.407144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.407782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.408517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.408546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 
00:37:02.701 [2024-05-15 10:30:48.409075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.409779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.409807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.410482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.411041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.411050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.411671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.412229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.412238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.412871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.413161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.413170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.413442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.413995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.414003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.414350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.414949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.414956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.415489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.415896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.415903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 
00:37:02.701 [2024-05-15 10:30:48.416147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.416674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.416681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.417200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.417888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.417915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.418553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.419123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.419132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.419776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.420497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.420525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.421145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.421680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.421709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.701 qpair failed and we were unable to recover it. 00:37:02.701 [2024-05-15 10:30:48.422249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.701 [2024-05-15 10:30:48.422901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.422929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.423590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.424157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.424166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 
00:37:02.702 [2024-05-15 10:30:48.424792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.425070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.425087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.425820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.426297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.426308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.426920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.702 [2024-05-15 10:30:48.427586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.427615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:02.702 [2024-05-15 10:30:48.428239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.428635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.428666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.429203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.429575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.429603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.430143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.430778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.430806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 
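The bdev_malloc_create 64 512 -b Malloc0 call interleaved above goes through the harness's rpc_cmd wrapper, which forwards to the SPDK target's JSON-RPC interface. Outside the harness the same step looks roughly like this (a sketch assuming a running target with the default RPC socket; sizes and name are taken from the trace):

    # Create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0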
00:37:02.702 [2024-05-15 10:30:48.431241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.431798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.431826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.432533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.433099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.433108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.433651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.434096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.434106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.434731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.435513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.435540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.436086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.436717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.436744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.437502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.437826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.437835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.438436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.439010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.439016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 
00:37:02.702 [2024-05-15 10:30:48.439663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.440068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.440077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.440703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.441272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.441281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.441919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.442509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.442538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 Malloc0 00:37:02.702 [2024-05-15 10:30:48.442981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.702 [2024-05-15 10:30:48.443670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:02.702 [2024-05-15 10:30:48.443699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.702 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:02.702 [2024-05-15 10:30:48.444509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.445073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.445083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.445704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.446273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.446281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 
00:37:02.702 [2024-05-15 10:30:48.446943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.447637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.447665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.448134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.448762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.448789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.449500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.450070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.450079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.450109] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.702 [2024-05-15 10:30:48.450249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.450620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.450647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.451196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.451761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.451768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.452297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.452536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.452549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.702 qpair failed and we were unable to recover it. 00:37:02.702 [2024-05-15 10:30:48.453159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.702 [2024-05-15 10:30:48.453567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.453574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 
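The *** TCP Transport Init *** notice above is the target acknowledging the nvmf_create_transport -t tcp -o call. A standalone sketch of that step, keeping exactly the flags recorded in the trace and assuming defaults for everything else:

    # Initialize the NVMe-oF TCP transport inside the target (flags as used by this test)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o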
00:37:02.703 [2024-05-15 10:30:48.454096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.454733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.454761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.455511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.456077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.456086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.456751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.457499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.457527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.458070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.458673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.458701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.459005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.703 [2024-05-15 10:30:48.459299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.459307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:02.703 [2024-05-15 10:30:48.459867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:02.703 [2024-05-15 10:30:48.460499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.460527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 
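Next the trace creates the subsystem the host will later connect to. A sketch of the equivalent standalone call (-a allows any host NQN to connect, -s sets the serial number shown in the log):

    # Create the test subsystem, allow any host, and fix the serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001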
00:37:02.703 [2024-05-15 10:30:48.461073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.461694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.461722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.462264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.462630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.462658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.462905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.463123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.463136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.463691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.464234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.464241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.464672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.465240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.465246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.465876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.466538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.466565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.467137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.467777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.467804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 
00:37:02.703 [2024-05-15 10:30:48.468505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.468795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.468805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.469074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.469668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.469675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.470133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.470746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.470774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.703 [2024-05-15 10:30:48.471492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:02.703 [2024-05-15 10:30:48.471827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.471837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.703 [2024-05-15 10:30:48.472061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:02.703 [2024-05-15 10:30:48.472245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.472253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.472811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.473260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.473267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 
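Attaching Malloc0 as a namespace of cnode1 is what gives the host something to issue I/O against once a connection finally sticks. Equivalent standalone sketch (the NSID is assigned automatically when not specified):

    # Expose the Malloc0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0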
00:37:02.703 [2024-05-15 10:30:48.473817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.474189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.474195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.474733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.475265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.475271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.475902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.476490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.476518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.476761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.476997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.477004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.477455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.477972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.703 [2024-05-15 10:30:48.477978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.703 qpair failed and we were unable to recover it. 00:37:02.703 [2024-05-15 10:30:48.478502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.479032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.479038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.704 qpair failed and we were unable to recover it. 00:37:02.704 [2024-05-15 10:30:48.479649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.480218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.480227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.704 qpair failed and we were unable to recover it. 
00:37:02.704 [2024-05-15 10:30:48.480834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.481283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.481290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.704 qpair failed and we were unable to recover it. 00:37:02.704 [2024-05-15 10:30:48.481827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.482506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.482534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.704 qpair failed and we were unable to recover it. 00:37:02.704 [2024-05-15 10:30:48.483130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:02.704 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:02.704 [2024-05-15 10:30:48.483911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.704 [2024-05-15 10:30:48.483939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.704 qpair failed and we were unable to recover it. 00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:02.967 [2024-05-15 10:30:48.484238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:02.967 [2024-05-15 10:30:48.484653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.484682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.967 qpair failed and we were unable to recover it. 00:37:02.967 [2024-05-15 10:30:48.485176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.485849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.485877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.967 qpair failed and we were unable to recover it. 00:37:02.967 [2024-05-15 10:30:48.486539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.486709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.486718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.967 qpair failed and we were unable to recover it. 
00:37:02.967 [2024-05-15 10:30:48.487191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.487743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.487754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.967 qpair failed and we were unable to recover it. 00:37:02.967 [2024-05-15 10:30:48.487995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.488635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.488643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.967 qpair failed and we were unable to recover it. 00:37:02.967 [2024-05-15 10:30:48.489175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.489803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.489831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cbc000b90 with addr=10.0.0.2, port=4420 00:37:02.967 qpair failed and we were unable to recover it. 00:37:02.967 [2024-05-15 10:30:48.490164] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:02.967 [2024-05-15 10:30:48.490410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.967 [2024-05-15 10:30:48.490516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:02.967 [2024-05-15 10:30:48.492801] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:37:02.967 [2024-05-15 10:30:48.492836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f9cbc000b90 (107): Transport endpoint is not connected 00:37:02.967 [2024-05-15 10:30:48.492865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.967 qpair failed and we were unable to recover it. 
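With the listener added, the target prints *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** and the errno = 111 retries give way to errors at the NVMe-oF layer instead (the deprecation warning simply notes that the [listen_]address.transport RPC key is being replaced by trtype). The listener step as a standalone sketch, using the address and port from the log:

    # Accept NVMe/TCP connections for the test subsystem on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420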
00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable
00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:02.967 [2024-05-15 10:30:48.501142] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:02.967 [2024-05-15 10:30:48.501266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:02.967 [2024-05-15 10:30:48.501287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:02.967 [2024-05-15 10:30:48.501299] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:02.967 [2024-05-15 10:30:48.501304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:02.967 [2024-05-15 10:30:48.501320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:02.967 qpair failed and we were unable to recover it.
00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:37:02.967 10:30:48 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3091800
00:37:02.967 [2024-05-15 10:30:48.510874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:02.967 [2024-05-15 10:30:48.511015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:02.967 [2024-05-15 10:30:48.511029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:02.967 [2024-05-15 10:30:48.511037] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:02.967 [2024-05-15 10:30:48.511042] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:02.967 [2024-05-15 10:30:48.511055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:02.967 qpair failed and we were unable to recover it.
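From this point the failure signature changes: instead of refused TCP connects, the target rejects each I/O queue pair with "Unknown controller ID 0x1" (the controller the host is trying to reattach its qpair to no longer exists on the target), and every Fabrics CONNECT in the blocks that repeat below completes with sct 1, sc 130. Assuming the status follows NVMe-oF numbering, sct 1 is the command-specific status type and 130 is 0x82, "Connect Invalid Parameters":

    # Decode of the repeated CONNECT status (hex view of sc 130).
    printf 'sct 1 (command specific), sc 130 = 0x%02x\n' 130   # 0x82: Connect Invalid Parameters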
00:37:02.968 [2024-05-15 10:30:48.520938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.521051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.521070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.521076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.521081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.521096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.531197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.531371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.531391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.531397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.531402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.531418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.540961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.541065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.541079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.541084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.541089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.541102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 
00:37:02.968 [2024-05-15 10:30:48.550917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.551031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.551051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.551057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.551061] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.551077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.561011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.561127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.561146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.561153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.561158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.561174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.571078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.571198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.571213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.571218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.571222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.571236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 
00:37:02.968 [2024-05-15 10:30:48.581094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.581199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.581213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.581218] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.581223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.581237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.591130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.591237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.591250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.591255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.591259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.591272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.601019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.601150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.601163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.601171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.601175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.601187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 
00:37:02.968 [2024-05-15 10:30:48.611187] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.611303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.611316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.611321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.611326] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.611338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.621165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.621274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.621287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.621297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.621301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.621313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.631202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.631307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.631321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.631326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.631330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.631343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 
00:37:02.968 [2024-05-15 10:30:48.641218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.641337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.641351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.641356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.641360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.641373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.651278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.651389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.651402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.651407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.651412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.651424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.661222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.661330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.661343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.661348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.661352] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.661365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 
00:37:02.968 [2024-05-15 10:30:48.671365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.671465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.671478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.671484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.671488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.671500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.681332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.681439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.681452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.681457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.681461] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.681473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.691397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.691714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.691730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.691735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.691739] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.691751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 
00:37:02.968 [2024-05-15 10:30:48.701381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.701482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.701495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.968 [2024-05-15 10:30:48.701500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.968 [2024-05-15 10:30:48.701504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.968 [2024-05-15 10:30:48.701517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.968 qpair failed and we were unable to recover it. 00:37:02.968 [2024-05-15 10:30:48.711434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.968 [2024-05-15 10:30:48.711542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.968 [2024-05-15 10:30:48.711556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.969 [2024-05-15 10:30:48.711561] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.969 [2024-05-15 10:30:48.711565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.969 [2024-05-15 10:30:48.711577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.969 qpair failed and we were unable to recover it. 00:37:02.969 [2024-05-15 10:30:48.721434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.969 [2024-05-15 10:30:48.721576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.969 [2024-05-15 10:30:48.721588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.969 [2024-05-15 10:30:48.721593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.969 [2024-05-15 10:30:48.721597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.969 [2024-05-15 10:30:48.721610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.969 qpair failed and we were unable to recover it. 
00:37:02.969 [2024-05-15 10:30:48.731527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.969 [2024-05-15 10:30:48.731631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.969 [2024-05-15 10:30:48.731644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.969 [2024-05-15 10:30:48.731650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.969 [2024-05-15 10:30:48.731654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.969 [2024-05-15 10:30:48.731669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.969 qpair failed and we were unable to recover it. 00:37:02.969 [2024-05-15 10:30:48.741597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.969 [2024-05-15 10:30:48.741719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.969 [2024-05-15 10:30:48.741732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.969 [2024-05-15 10:30:48.741737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.969 [2024-05-15 10:30:48.741741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.969 [2024-05-15 10:30:48.741753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.969 qpair failed and we were unable to recover it. 00:37:02.969 [2024-05-15 10:30:48.751521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:02.969 [2024-05-15 10:30:48.751621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:02.969 [2024-05-15 10:30:48.751634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:02.969 [2024-05-15 10:30:48.751639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:02.969 [2024-05-15 10:30:48.751643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:02.969 [2024-05-15 10:30:48.751655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:02.969 qpair failed and we were unable to recover it. 
00:37:03.232 [2024-05-15 10:30:48.761651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.761758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.761771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.761776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.761781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.761793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 00:37:03.232 [2024-05-15 10:30:48.771651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.771758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.771772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.771776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.771781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.771793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 00:37:03.232 [2024-05-15 10:30:48.781819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.781942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.781958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.781964] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.781968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.781980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 
00:37:03.232 [2024-05-15 10:30:48.791769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.791880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.791894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.791899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.791903] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.791915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 00:37:03.232 [2024-05-15 10:30:48.801725] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.801847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.801867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.801873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.801877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.801893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 00:37:03.232 [2024-05-15 10:30:48.811727] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.811846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.811866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.811872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.811876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.811892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 
00:37:03.232 [2024-05-15 10:30:48.821730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.821848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.821867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.821873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.821881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.821897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 00:37:03.232 [2024-05-15 10:30:48.831690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.831802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.831822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.831828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.831832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.831848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 00:37:03.232 [2024-05-15 10:30:48.841800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.841944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.841958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.841963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.841968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.841979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 
00:37:03.232 [2024-05-15 10:30:48.851712] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.232 [2024-05-15 10:30:48.851818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.232 [2024-05-15 10:30:48.851832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.232 [2024-05-15 10:30:48.851837] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.232 [2024-05-15 10:30:48.851841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.232 [2024-05-15 10:30:48.851854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.232 qpair failed and we were unable to recover it. 00:37:03.232 [2024-05-15 10:30:48.861853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.861956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.861969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.861974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.861978] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.861990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.871785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.871957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.871971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.871976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.871980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.871993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 
00:37:03.233 [2024-05-15 10:30:48.881926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.882037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.882051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.882055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.882060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.882072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.891943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.892066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.892085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.892091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.892096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.892112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.902150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.902254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.902268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.902274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.902278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.902297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 
00:37:03.233 [2024-05-15 10:30:48.911976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.912109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.912123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.912128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.912135] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.912148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.922044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.922157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.922170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.922175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.922179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.922192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.932010] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.932119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.932132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.932138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.932142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.932154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 
00:37:03.233 [2024-05-15 10:30:48.942051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.942160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.942180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.942186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.942191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.942206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.952035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.952137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.952151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.952156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.952161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.952173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.962103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.962209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.962223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.962228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.962232] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.962244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 
00:37:03.233 [2024-05-15 10:30:48.972163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.972274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.972288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.972300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.972304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.972316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.982402] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.982507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.982520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.982525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.982530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.982542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 00:37:03.233 [2024-05-15 10:30:48.992122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.233 [2024-05-15 10:30:48.992228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.233 [2024-05-15 10:30:48.992241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.233 [2024-05-15 10:30:48.992246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.233 [2024-05-15 10:30:48.992250] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.233 [2024-05-15 10:30:48.992263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.233 qpair failed and we were unable to recover it. 
00:37:03.233 [2024-05-15 10:30:49.002284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.234 [2024-05-15 10:30:49.002438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.234 [2024-05-15 10:30:49.002451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.234 [2024-05-15 10:30:49.002461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.234 [2024-05-15 10:30:49.002465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.234 [2024-05-15 10:30:49.002479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.234 qpair failed and we were unable to recover it. 00:37:03.234 [2024-05-15 10:30:49.012133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.234 [2024-05-15 10:30:49.012256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.234 [2024-05-15 10:30:49.012270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.234 [2024-05-15 10:30:49.012275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.234 [2024-05-15 10:30:49.012279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.234 [2024-05-15 10:30:49.012297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.234 qpair failed and we were unable to recover it. 00:37:03.234 [2024-05-15 10:30:49.022278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.234 [2024-05-15 10:30:49.022388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.234 [2024-05-15 10:30:49.022401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.234 [2024-05-15 10:30:49.022406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.234 [2024-05-15 10:30:49.022410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.234 [2024-05-15 10:30:49.022423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.234 qpair failed and we were unable to recover it. 
00:37:03.497 [2024-05-15 10:30:49.032323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.497 [2024-05-15 10:30:49.032425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.497 [2024-05-15 10:30:49.032439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.497 [2024-05-15 10:30:49.032444] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.497 [2024-05-15 10:30:49.032448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.497 [2024-05-15 10:30:49.032461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.497 qpair failed and we were unable to recover it. 00:37:03.497 [2024-05-15 10:30:49.042369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.497 [2024-05-15 10:30:49.042512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.497 [2024-05-15 10:30:49.042526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.497 [2024-05-15 10:30:49.042531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.497 [2024-05-15 10:30:49.042535] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.497 [2024-05-15 10:30:49.042548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.497 qpair failed and we were unable to recover it. 00:37:03.497 [2024-05-15 10:30:49.052253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:03.497 [2024-05-15 10:30:49.052365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:03.497 [2024-05-15 10:30:49.052379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:03.497 [2024-05-15 10:30:49.052384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:03.497 [2024-05-15 10:30:49.052388] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:03.497 [2024-05-15 10:30:49.052401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:03.497 qpair failed and we were unable to recover it. 
00:37:03.497 [2024-05-15 10:30:49.062364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.497 [2024-05-15 10:30:49.062469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.497 [2024-05-15 10:30:49.062482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.497 [2024-05-15 10:30:49.062487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.497 [2024-05-15 10:30:49.062491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.497 [2024-05-15 10:30:49.062504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.497 qpair failed and we were unable to recover it.
00:37:03.497 [2024-05-15 10:30:49.072420] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.497 [2024-05-15 10:30:49.072739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.497 [2024-05-15 10:30:49.072754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.497 [2024-05-15 10:30:49.072759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.497 [2024-05-15 10:30:49.072763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.497 [2024-05-15 10:30:49.072774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.497 qpair failed and we were unable to recover it.
00:37:03.497 [2024-05-15 10:30:49.082439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.497 [2024-05-15 10:30:49.082556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.497 [2024-05-15 10:30:49.082569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.497 [2024-05-15 10:30:49.082574] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.497 [2024-05-15 10:30:49.082578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.497 [2024-05-15 10:30:49.082591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.497 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.092486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.092622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.092637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.092642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.092646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.092657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.102508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.102629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.102643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.102648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.102652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.102664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.112526] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.112642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.112655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.112661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.112665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.112677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.122587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.122691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.122704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.122709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.122713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.122725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.132476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.132600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.132614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.132619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.132623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.132639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.142608] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.142715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.142729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.142734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.142738] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.142751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.152639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.152748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.152761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.152766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.152770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.152782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.162669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.162789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.162809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.162815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.162819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.162834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.172713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.172832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.172852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.172858] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.172863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.172878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.182726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.182841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.182865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.182871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.182875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.182891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.192757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.192866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.192886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.192892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.192897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.192912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.202789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.202899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.202919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.202924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.202929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.202945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.212850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.212995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.213016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.213021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.213026] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.498 [2024-05-15 10:30:49.213041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.498 qpair failed and we were unable to recover it.
00:37:03.498 [2024-05-15 10:30:49.222836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.498 [2024-05-15 10:30:49.222954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.498 [2024-05-15 10:30:49.222969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.498 [2024-05-15 10:30:49.222974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.498 [2024-05-15 10:30:49.222983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.499 [2024-05-15 10:30:49.222997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.499 qpair failed and we were unable to recover it.
00:37:03.499 [2024-05-15 10:30:49.232755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.499 [2024-05-15 10:30:49.232861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.499 [2024-05-15 10:30:49.232880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.499 [2024-05-15 10:30:49.232887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.499 [2024-05-15 10:30:49.232891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.499 [2024-05-15 10:30:49.232907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.499 qpair failed and we were unable to recover it.
00:37:03.499 [2024-05-15 10:30:49.242884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.499 [2024-05-15 10:30:49.242993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.499 [2024-05-15 10:30:49.243013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.499 [2024-05-15 10:30:49.243019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.499 [2024-05-15 10:30:49.243023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.499 [2024-05-15 10:30:49.243039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.499 qpair failed and we were unable to recover it.
00:37:03.499 [2024-05-15 10:30:49.252918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.499 [2024-05-15 10:30:49.253035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.499 [2024-05-15 10:30:49.253050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.499 [2024-05-15 10:30:49.253055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.499 [2024-05-15 10:30:49.253060] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.499 [2024-05-15 10:30:49.253072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.499 qpair failed and we were unable to recover it.
00:37:03.499 [2024-05-15 10:30:49.262933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.499 [2024-05-15 10:30:49.263045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.499 [2024-05-15 10:30:49.263064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.499 [2024-05-15 10:30:49.263070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.499 [2024-05-15 10:30:49.263075] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.499 [2024-05-15 10:30:49.263091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.499 qpair failed and we were unable to recover it.
00:37:03.499 [2024-05-15 10:30:49.272992] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.499 [2024-05-15 10:30:49.273111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.499 [2024-05-15 10:30:49.273131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.499 [2024-05-15 10:30:49.273137] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.499 [2024-05-15 10:30:49.273142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.499 [2024-05-15 10:30:49.273157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.499 qpair failed and we were unable to recover it.
00:37:03.499 [2024-05-15 10:30:49.283054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.499 [2024-05-15 10:30:49.283200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.499 [2024-05-15 10:30:49.283220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.499 [2024-05-15 10:30:49.283226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.499 [2024-05-15 10:30:49.283230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.499 [2024-05-15 10:30:49.283246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.499 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.292922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.293043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.293057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.293063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.293067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.293080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.303051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.303153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.303166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.303171] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.303175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.303188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.313077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.313178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.313192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.313196] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.313204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.313217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.323129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.323234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.323247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.323252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.323256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.323268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.333146] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.333252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.333265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.333270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.333274] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.333286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.343162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.343274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.343287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.343298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.343302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.343315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.353217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.353325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.353338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.353343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.353348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.353360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.363122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.363232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.363246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.363250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.363255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.363267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.373273] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.373382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.373395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.373400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.373405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.373417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.383267] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.383372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.383385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.383390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.383395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.383407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.393310] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.393412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.393425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.393430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.393435] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.393447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.763 qpair failed and we were unable to recover it.
00:37:03.763 [2024-05-15 10:30:49.403338] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.763 [2024-05-15 10:30:49.403445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.763 [2024-05-15 10:30:49.403458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.763 [2024-05-15 10:30:49.403467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.763 [2024-05-15 10:30:49.403471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.763 [2024-05-15 10:30:49.403483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.413380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.413492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.413506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.413511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.413515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.413528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.423386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.423490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.423503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.423509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.423513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.423525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.433425] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.433533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.433546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.433551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.433555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.433568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.443374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.443481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.443495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.443500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.443504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.443516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.453524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.453657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.453670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.453675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.453679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.453690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.463534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.463638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.463651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.463656] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.463661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.463673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.473428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.473535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.473548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.473553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.473557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.473569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.483565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.483671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.483684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.483689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.483693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.483705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.493613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.493757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.493772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.493778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.493782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.493794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.503587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.503691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.503704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.503709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.503713] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.503726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.513699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.513801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.513815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.513820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.513824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.513836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.523701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.523810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.523830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.523835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.523840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.523856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.533717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.533836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.533855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.533861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.764 [2024-05-15 10:30:49.533866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.764 [2024-05-15 10:30:49.533885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.764 qpair failed and we were unable to recover it.
00:37:03.764 [2024-05-15 10:30:49.543771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.764 [2024-05-15 10:30:49.543898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.764 [2024-05-15 10:30:49.543918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.764 [2024-05-15 10:30:49.543924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.765 [2024-05-15 10:30:49.543928] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.765 [2024-05-15 10:30:49.543944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.765 qpair failed and we were unable to recover it.
00:37:03.765 [2024-05-15 10:30:49.553781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:03.765 [2024-05-15 10:30:49.553887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:03.765 [2024-05-15 10:30:49.553907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:03.765 [2024-05-15 10:30:49.553914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:03.765 [2024-05-15 10:30:49.553918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:03.765 [2024-05-15 10:30:49.553933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:03.765 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.563800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.563910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.563929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.563936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.563941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.563956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.573843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.573957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.573977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.573983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.573987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.574003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.583867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.583979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.584002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.584009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.584013] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.584029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.593922] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.594026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.594046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.594051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.594056] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.594071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.603961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.604072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.604092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.604098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.604102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.604118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.613966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.614079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.614093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.614098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.614103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.614116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.623943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.624044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.624058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.624063] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.624067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.624086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.634042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.634153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.634167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.634172] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.634177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.634189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.644051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.644165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.644178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.644183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.644188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.644200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.654068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.654171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.654185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.654190] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.654194] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.654207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.664067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.664173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.664186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.664191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.664195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.664207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.674041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.674173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.674186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.674191] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.028 [2024-05-15 10:30:49.674195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.028 [2024-05-15 10:30:49.674207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.028 qpair failed and we were unable to recover it.
00:37:04.028 [2024-05-15 10:30:49.684164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.028 [2024-05-15 10:30:49.684270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.028 [2024-05-15 10:30:49.684284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.028 [2024-05-15 10:30:49.684288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.684299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.684311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.694054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.694189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.694203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.694208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.694212] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.694224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.704181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.704285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.704303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.704308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.704312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.704325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.714243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.714348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.714361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.714366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.714373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.714386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.724181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.724286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.724304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.724309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.724313] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.724326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.734286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.734394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.734408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.734413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.734417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.734430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.744198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.744309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.744323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.744328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.744332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.744346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.754373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.754482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.754496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.754501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.754505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.754518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.764395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.764504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.764517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.764522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.764526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.764539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.774416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.774525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.774538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.774543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.774548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.774560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.784415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.784518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.784531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.784536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.784540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.784552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.794504] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.794611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.794624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.794629] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.794633] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.794645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.804499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.804639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.804652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.804660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.804664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.804676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.029 [2024-05-15 10:30:49.814508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.029 [2024-05-15 10:30:49.814623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.029 [2024-05-15 10:30:49.814636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.029 [2024-05-15 10:30:49.814641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.029 [2024-05-15 10:30:49.814645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.029 [2024-05-15 10:30:49.814657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.029 qpair failed and we were unable to recover it.
00:37:04.293 [2024-05-15 10:30:49.824530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.293 [2024-05-15 10:30:49.824632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.293 [2024-05-15 10:30:49.824645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.293 [2024-05-15 10:30:49.824650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.293 [2024-05-15 10:30:49.824654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.293 [2024-05-15 10:30:49.824666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.293 qpair failed and we were unable to recover it.
00:37:04.293 [2024-05-15 10:30:49.834573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.293 [2024-05-15 10:30:49.834675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.293 [2024-05-15 10:30:49.834687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.293 [2024-05-15 10:30:49.834693] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.293 [2024-05-15 10:30:49.834697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.293 [2024-05-15 10:30:49.834710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.293 qpair failed and we were unable to recover it.
00:37:04.293 [2024-05-15 10:30:49.844633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.293 [2024-05-15 10:30:49.844737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.293 [2024-05-15 10:30:49.844750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.293 [2024-05-15 10:30:49.844755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.293 [2024-05-15 10:30:49.844759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.293 [2024-05-15 10:30:49.844772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.293 qpair failed and we were unable to recover it.
00:37:04.293 [2024-05-15 10:30:49.854543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.293 [2024-05-15 10:30:49.854651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.293 [2024-05-15 10:30:49.854664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.293 [2024-05-15 10:30:49.854669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.293 [2024-05-15 10:30:49.854673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.293 [2024-05-15 10:30:49.854685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.293 qpair failed and we were unable to recover it.
00:37:04.293 [2024-05-15 10:30:49.864632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.293 [2024-05-15 10:30:49.864756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.293 [2024-05-15 10:30:49.864769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.293 [2024-05-15 10:30:49.864774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.293 [2024-05-15 10:30:49.864778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.293 [2024-05-15 10:30:49.864789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.293 qpair failed and we were unable to recover it.
00:37:04.293 [2024-05-15 10:30:49.874688] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.293 [2024-05-15 10:30:49.874795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.293 [2024-05-15 10:30:49.874808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.293 [2024-05-15 10:30:49.874813] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.293 [2024-05-15 10:30:49.874817] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.293 [2024-05-15 10:30:49.874829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.293 qpair failed and we were unable to recover it.
00:37:04.293 [2024-05-15 10:30:49.884722] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.884826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.884839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.884844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.884848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.884860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.894716] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.894829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.894845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.894850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.894854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.894867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.904756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.904864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.904884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.904890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.904895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.904910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.914781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.914889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.914909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.914915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.914920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.914936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.924724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.924856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.924870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.924875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.924879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.924891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.934760] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.934873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.934886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.934891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.934896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.934908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.944852] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.944958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.944971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.944976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.944981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.944993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.954870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.954978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.954991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.954996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.955001] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.955013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.964927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.965029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.965042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.965047] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.965051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.965063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.974989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.975094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.975106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.975111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.975116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.975128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.984995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.985122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.985138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.985143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.985147] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.985159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:49.995016] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:49.995129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:49.995149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:49.995155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:49.995160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:49.995175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:50.004919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:50.005230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:50.005246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:50.005251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:50.005256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:50.005268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.294 qpair failed and we were unable to recover it.
00:37:04.294 [2024-05-15 10:30:50.015045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.294 [2024-05-15 10:30:50.015159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.294 [2024-05-15 10:30:50.015172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.294 [2024-05-15 10:30:50.015178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.294 [2024-05-15 10:30:50.015183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.294 [2024-05-15 10:30:50.015194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.295 [2024-05-15 10:30:50.025070] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.295 [2024-05-15 10:30:50.025174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.295 [2024-05-15 10:30:50.025187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.295 [2024-05-15 10:30:50.025192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.295 [2024-05-15 10:30:50.025197] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.295 [2024-05-15 10:30:50.025213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.295 [2024-05-15 10:30:50.035117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.295 [2024-05-15 10:30:50.035224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.295 [2024-05-15 10:30:50.035237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.295 [2024-05-15 10:30:50.035242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.295 [2024-05-15 10:30:50.035247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.295 [2024-05-15 10:30:50.035259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.295 [2024-05-15 10:30:50.045118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.295 [2024-05-15 10:30:50.045221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.295 [2024-05-15 10:30:50.045235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.295 [2024-05-15 10:30:50.045240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.295 [2024-05-15 10:30:50.045244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.295 [2024-05-15 10:30:50.045256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.295 [2024-05-15 10:30:50.055164] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.295 [2024-05-15 10:30:50.055310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.295 [2024-05-15 10:30:50.055324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.295 [2024-05-15 10:30:50.055329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.295 [2024-05-15 10:30:50.055334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.295 [2024-05-15 10:30:50.055346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.295 [2024-05-15 10:30:50.065177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.295 [2024-05-15 10:30:50.065279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.295 [2024-05-15 10:30:50.065297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.295 [2024-05-15 10:30:50.065303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.295 [2024-05-15 10:30:50.065307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.295 [2024-05-15 10:30:50.065320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.295 [2024-05-15 10:30:50.075387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.295 [2024-05-15 10:30:50.075494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.295 [2024-05-15 10:30:50.075510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.295 [2024-05-15 10:30:50.075514] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.295 [2024-05-15 10:30:50.075519] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.295 [2024-05-15 10:30:50.075532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.295 [2024-05-15 10:30:50.085232] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.295 [2024-05-15 10:30:50.085344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.295 [2024-05-15 10:30:50.085358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.295 [2024-05-15 10:30:50.085362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.295 [2024-05-15 10:30:50.085367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.295 [2024-05-15 10:30:50.085379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.295 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.095297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.095442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.095455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.095461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.095465] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.095476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.559 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.105278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.105385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.105398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.105403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.105407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.105420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.559 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.115215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.115322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.115336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.115341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.115348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.115361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.559 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.125386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.125490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.125504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.125509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.125514] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.125526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.559 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.135390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.135548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.135561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.135567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.135571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.135584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.559 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.145408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.145518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.145531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.145536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.145540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.145552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.559 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.155384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.155484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.155497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.155502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.155507] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.155519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.559 qpair failed and we were unable to recover it.
00:37:04.559 [2024-05-15 10:30:50.165398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.559 [2024-05-15 10:30:50.165504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.559 [2024-05-15 10:30:50.165517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.559 [2024-05-15 10:30:50.165522] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.559 [2024-05-15 10:30:50.165526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.559 [2024-05-15 10:30:50.165538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.175483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.175593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.175607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.175612] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.175616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.175628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.185512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.185618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.185631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.185636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.185640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.185652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.195568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.195697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.195710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.195716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.195720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.195732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.205474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.205581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.205594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.205602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.205606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.205618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.215630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.215745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.215758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.215762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.215767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.215778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.225660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.225818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.225831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.225836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.225840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.225852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.235667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.235776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.235790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.235794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.235798] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.235810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.245692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.245806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.245826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.245832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.245837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.245852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.255754] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.255869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.255889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.255895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.255900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.255915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.265730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.265836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.265856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.265862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.265867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.265882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.275800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.275905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.275921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.275926] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.275930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.275943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.285834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.285944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.285957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.285962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.285966] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.285978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.295888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.296033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.296046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.560 [2024-05-15 10:30:50.296055] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.560 [2024-05-15 10:30:50.296059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.560 [2024-05-15 10:30:50.296071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.560 qpair failed and we were unable to recover it.
00:37:04.560 [2024-05-15 10:30:50.305863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.560 [2024-05-15 10:30:50.305965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.560 [2024-05-15 10:30:50.305979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.561 [2024-05-15 10:30:50.305984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.561 [2024-05-15 10:30:50.305988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.561 [2024-05-15 10:30:50.306000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.561 qpair failed and we were unable to recover it.
00:37:04.561 [2024-05-15 10:30:50.315880] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.561 [2024-05-15 10:30:50.315993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.561 [2024-05-15 10:30:50.316013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.561 [2024-05-15 10:30:50.316019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.561 [2024-05-15 10:30:50.316024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.561 [2024-05-15 10:30:50.316040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.561 qpair failed and we were unable to recover it.
00:37:04.561 [2024-05-15 10:30:50.325940] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.561 [2024-05-15 10:30:50.326046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.561 [2024-05-15 10:30:50.326061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.561 [2024-05-15 10:30:50.326067] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.561 [2024-05-15 10:30:50.326071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.561 [2024-05-15 10:30:50.326084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.561 qpair failed and we were unable to recover it.
00:37:04.561 [2024-05-15 10:30:50.335969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.561 [2024-05-15 10:30:50.336080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.561 [2024-05-15 10:30:50.336093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.561 [2024-05-15 10:30:50.336099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.561 [2024-05-15 10:30:50.336103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.561 [2024-05-15 10:30:50.336115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.561 qpair failed and we were unable to recover it.
00:37:04.561 [2024-05-15 10:30:50.345974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.561 [2024-05-15 10:30:50.346087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.561 [2024-05-15 10:30:50.346107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.561 [2024-05-15 10:30:50.346113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.561 [2024-05-15 10:30:50.346118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.561 [2024-05-15 10:30:50.346133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.561 qpair failed and we were unable to recover it.
00:37:04.824 [2024-05-15 10:30:50.356045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.824 [2024-05-15 10:30:50.356157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.824 [2024-05-15 10:30:50.356176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.824 [2024-05-15 10:30:50.356182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.824 [2024-05-15 10:30:50.356187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.824 [2024-05-15 10:30:50.356203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.824 qpair failed and we were unable to recover it.
00:37:04.824 [2024-05-15 10:30:50.365975] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.824 [2024-05-15 10:30:50.366109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.824 [2024-05-15 10:30:50.366128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.824 [2024-05-15 10:30:50.366134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.824 [2024-05-15 10:30:50.366139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.824 [2024-05-15 10:30:50.366155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.824 qpair failed and we were unable to recover it.
00:37:04.824 [2024-05-15 10:30:50.376059] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.824 [2024-05-15 10:30:50.376174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.824 [2024-05-15 10:30:50.376193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.824 [2024-05-15 10:30:50.376199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.824 [2024-05-15 10:30:50.376204] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.824 [2024-05-15 10:30:50.376220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.824 qpair failed and we were unable to recover it.
00:37:04.824 [2024-05-15 10:30:50.386108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.824 [2024-05-15 10:30:50.386213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.824 [2024-05-15 10:30:50.386235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.824 [2024-05-15 10:30:50.386240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.824 [2024-05-15 10:30:50.386244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.824 [2024-05-15 10:30:50.386258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.824 qpair failed and we were unable to recover it.
00:37:04.824 [2024-05-15 10:30:50.396024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.824 [2024-05-15 10:30:50.396129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.824 [2024-05-15 10:30:50.396143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.824 [2024-05-15 10:30:50.396148] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.824 [2024-05-15 10:30:50.396152] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.824 [2024-05-15 10:30:50.396165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.824 qpair failed and we were unable to recover it.
00:37:04.824 [2024-05-15 10:30:50.406195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.824 [2024-05-15 10:30:50.406307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.824 [2024-05-15 10:30:50.406321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.406327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.406331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.406343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.416221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.416362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.416375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.416380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.416384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.416396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.426120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.426232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.426246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.426251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.426255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.426270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.436139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.436243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.436256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.436261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.436265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.436277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.446316] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.446425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.446438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.446443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.446448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.446460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.456326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.456435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.456448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.456453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.456457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.456470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.466364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.466465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.466479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.466484] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.466488] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.466501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.476353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.476459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.476475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.476480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.476484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.476497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.486457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.486568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.486582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.486587] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.486591] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.486603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.496435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.496541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.496554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.496559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.496564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.496576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.506485] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.506625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.506638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.506644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.506648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.506660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.516474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.516623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.516637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.516642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.516649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.516662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.526547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.526650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.526664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.526669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.526673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.526685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.536600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.536748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.536768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.536773] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.536778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.536794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.825 [2024-05-15 10:30:50.546593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.825 [2024-05-15 10:30:50.546703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.825 [2024-05-15 10:30:50.546722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.825 [2024-05-15 10:30:50.546728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.825 [2024-05-15 10:30:50.546733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.825 [2024-05-15 10:30:50.546748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.825 qpair failed and we were unable to recover it.
00:37:04.826 [2024-05-15 10:30:50.556609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.826 [2024-05-15 10:30:50.556745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.826 [2024-05-15 10:30:50.556760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.826 [2024-05-15 10:30:50.556765] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.826 [2024-05-15 10:30:50.556769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.826 [2024-05-15 10:30:50.556782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.826 qpair failed and we were unable to recover it.
00:37:04.826 [2024-05-15 10:30:50.566634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.826 [2024-05-15 10:30:50.566748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.826 [2024-05-15 10:30:50.566768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.826 [2024-05-15 10:30:50.566774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.826 [2024-05-15 10:30:50.566778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.826 [2024-05-15 10:30:50.566794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.826 qpair failed and we were unable to recover it.
00:37:04.826 [2024-05-15 10:30:50.576674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.826 [2024-05-15 10:30:50.576789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.826 [2024-05-15 10:30:50.576809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.826 [2024-05-15 10:30:50.576815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.826 [2024-05-15 10:30:50.576819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.826 [2024-05-15 10:30:50.576835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.826 qpair failed and we were unable to recover it.
00:37:04.826 [2024-05-15 10:30:50.586647] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.826 [2024-05-15 10:30:50.586753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.826 [2024-05-15 10:30:50.586767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.826 [2024-05-15 10:30:50.586772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.826 [2024-05-15 10:30:50.586777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.826 [2024-05-15 10:30:50.586789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.826 qpair failed and we were unable to recover it.
00:37:04.826 [2024-05-15 10:30:50.596728] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.826 [2024-05-15 10:30:50.596840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.826 [2024-05-15 10:30:50.596860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.826 [2024-05-15 10:30:50.596865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.826 [2024-05-15 10:30:50.596870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.826 [2024-05-15 10:30:50.596886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.826 qpair failed and we were unable to recover it.
00:37:04.826 [2024-05-15 10:30:50.606633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.826 [2024-05-15 10:30:50.606737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.826 [2024-05-15 10:30:50.606751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.826 [2024-05-15 10:30:50.606757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.826 [2024-05-15 10:30:50.606765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.826 [2024-05-15 10:30:50.606778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.826 qpair failed and we were unable to recover it.
00:37:04.826 [2024-05-15 10:30:50.616775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:04.826 [2024-05-15 10:30:50.616878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:04.826 [2024-05-15 10:30:50.616892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:04.826 [2024-05-15 10:30:50.616897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:04.826 [2024-05-15 10:30:50.616902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:04.826 [2024-05-15 10:30:50.616914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:04.826 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.626799] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.626904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.626917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.626922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.626926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.626938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.089 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.636794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.636910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.636923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.636928] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.636932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.636945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.089 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.646846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.646954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.646968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.646973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.646977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.646989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.089 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.656870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.656980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.656994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.656999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.657003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.657015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.089 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.666900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.667002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.667015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.667020] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.667024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.667035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.089 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.676985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.677101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.677115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.677120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.677124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.677136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.089 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.686897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.687031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.687045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.687050] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.687054] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.687066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.089 qpair failed and we were unable to recover it.
00:37:05.089 [2024-05-15 10:30:50.696930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.089 [2024-05-15 10:30:50.697049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.089 [2024-05-15 10:30:50.697069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.089 [2024-05-15 10:30:50.697078] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.089 [2024-05-15 10:30:50.697083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.089 [2024-05-15 10:30:50.697098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.707014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.090 [2024-05-15 10:30:50.707153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.090 [2024-05-15 10:30:50.707172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.090 [2024-05-15 10:30:50.707178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.090 [2024-05-15 10:30:50.707183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.090 [2024-05-15 10:30:50.707199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.716954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.090 [2024-05-15 10:30:50.717059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.090 [2024-05-15 10:30:50.717074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.090 [2024-05-15 10:30:50.717079] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.090 [2024-05-15 10:30:50.717084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.090 [2024-05-15 10:30:50.717096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.727112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.090 [2024-05-15 10:30:50.727225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.090 [2024-05-15 10:30:50.727238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.090 [2024-05-15 10:30:50.727244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.090 [2024-05-15 10:30:50.727248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.090 [2024-05-15 10:30:50.727261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.737128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.090 [2024-05-15 10:30:50.737237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.090 [2024-05-15 10:30:50.737250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.090 [2024-05-15 10:30:50.737255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.090 [2024-05-15 10:30:50.737260] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.090 [2024-05-15 10:30:50.737272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.747253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.090 [2024-05-15 10:30:50.747366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.090 [2024-05-15 10:30:50.747379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.090 [2024-05-15 10:30:50.747384] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.090 [2024-05-15 10:30:50.747389] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.090 [2024-05-15 10:30:50.747402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.757231] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.090 [2024-05-15 10:30:50.757346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.090 [2024-05-15 10:30:50.757359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.090 [2024-05-15 10:30:50.757364] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.090 [2024-05-15 10:30:50.757369] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.090 [2024-05-15 10:30:50.757381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.767222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:05.090 [2024-05-15 10:30:50.767335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:05.090 [2024-05-15 10:30:50.767348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:05.090 [2024-05-15 10:30:50.767353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:05.090 [2024-05-15 10:30:50.767357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:05.090 [2024-05-15 10:30:50.767370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:05.090 qpair failed and we were unable to recover it.
00:37:05.090 [2024-05-15 10:30:50.777289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.090 [2024-05-15 10:30:50.777431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.090 [2024-05-15 10:30:50.777444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.090 [2024-05-15 10:30:50.777449] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.090 [2024-05-15 10:30:50.777453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.090 [2024-05-15 10:30:50.777465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.090 qpair failed and we were unable to recover it. 00:37:05.090 [2024-05-15 10:30:50.787129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.090 [2024-05-15 10:30:50.787233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.090 [2024-05-15 10:30:50.787249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.090 [2024-05-15 10:30:50.787254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.090 [2024-05-15 10:30:50.787258] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.090 [2024-05-15 10:30:50.787270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.090 qpair failed and we were unable to recover it. 00:37:05.090 [2024-05-15 10:30:50.797311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.090 [2024-05-15 10:30:50.797433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.090 [2024-05-15 10:30:50.797446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.090 [2024-05-15 10:30:50.797451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.090 [2024-05-15 10:30:50.797456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.090 [2024-05-15 10:30:50.797468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.090 qpair failed and we were unable to recover it. 
00:37:05.090 [2024-05-15 10:30:50.807326] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.090 [2024-05-15 10:30:50.807429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.090 [2024-05-15 10:30:50.807442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.090 [2024-05-15 10:30:50.807448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.090 [2024-05-15 10:30:50.807452] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.090 [2024-05-15 10:30:50.807464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.090 qpair failed and we were unable to recover it. 00:37:05.090 [2024-05-15 10:30:50.817350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.090 [2024-05-15 10:30:50.817461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.090 [2024-05-15 10:30:50.817474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.090 [2024-05-15 10:30:50.817480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.090 [2024-05-15 10:30:50.817484] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.090 [2024-05-15 10:30:50.817496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.090 qpair failed and we were unable to recover it. 00:37:05.090 [2024-05-15 10:30:50.827370] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.090 [2024-05-15 10:30:50.827528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.090 [2024-05-15 10:30:50.827541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.090 [2024-05-15 10:30:50.827546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.090 [2024-05-15 10:30:50.827551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.090 [2024-05-15 10:30:50.827567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.090 qpair failed and we were unable to recover it. 
00:37:05.090 [2024-05-15 10:30:50.837295] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.090 [2024-05-15 10:30:50.837399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.090 [2024-05-15 10:30:50.837412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.090 [2024-05-15 10:30:50.837417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.090 [2024-05-15 10:30:50.837422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.091 [2024-05-15 10:30:50.837434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.091 qpair failed and we were unable to recover it. 00:37:05.091 [2024-05-15 10:30:50.847413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.091 [2024-05-15 10:30:50.847521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.091 [2024-05-15 10:30:50.847534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.091 [2024-05-15 10:30:50.847539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.091 [2024-05-15 10:30:50.847543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.091 [2024-05-15 10:30:50.847555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.091 qpair failed and we were unable to recover it. 00:37:05.091 [2024-05-15 10:30:50.857426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.091 [2024-05-15 10:30:50.857540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.091 [2024-05-15 10:30:50.857553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.091 [2024-05-15 10:30:50.857558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.091 [2024-05-15 10:30:50.857562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.091 [2024-05-15 10:30:50.857575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.091 qpair failed and we were unable to recover it. 
00:37:05.091 [2024-05-15 10:30:50.867446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.091 [2024-05-15 10:30:50.867552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.091 [2024-05-15 10:30:50.867565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.091 [2024-05-15 10:30:50.867570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.091 [2024-05-15 10:30:50.867574] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.091 [2024-05-15 10:30:50.867586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.091 qpair failed and we were unable to recover it. 00:37:05.091 [2024-05-15 10:30:50.877521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.091 [2024-05-15 10:30:50.877621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.091 [2024-05-15 10:30:50.877637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.091 [2024-05-15 10:30:50.877642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.091 [2024-05-15 10:30:50.877646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.091 [2024-05-15 10:30:50.877658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.091 qpair failed and we were unable to recover it. 00:37:05.354 [2024-05-15 10:30:50.887559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.354 [2024-05-15 10:30:50.887666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.354 [2024-05-15 10:30:50.887680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.354 [2024-05-15 10:30:50.887685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.354 [2024-05-15 10:30:50.887689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.354 [2024-05-15 10:30:50.887701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.354 qpair failed and we were unable to recover it. 
00:37:05.354 [2024-05-15 10:30:50.897585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.354 [2024-05-15 10:30:50.897695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.354 [2024-05-15 10:30:50.897708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.354 [2024-05-15 10:30:50.897712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.354 [2024-05-15 10:30:50.897717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.354 [2024-05-15 10:30:50.897729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.354 qpair failed and we were unable to recover it. 00:37:05.354 [2024-05-15 10:30:50.907482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.354 [2024-05-15 10:30:50.907589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.354 [2024-05-15 10:30:50.907603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.354 [2024-05-15 10:30:50.907607] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.354 [2024-05-15 10:30:50.907612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.354 [2024-05-15 10:30:50.907624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.354 qpair failed and we were unable to recover it. 00:37:05.354 [2024-05-15 10:30:50.917661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.354 [2024-05-15 10:30:50.917764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.354 [2024-05-15 10:30:50.917777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.354 [2024-05-15 10:30:50.917782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.354 [2024-05-15 10:30:50.917789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.354 [2024-05-15 10:30:50.917802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.354 qpair failed and we were unable to recover it. 
00:37:05.354 [2024-05-15 10:30:50.927655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.354 [2024-05-15 10:30:50.927760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.354 [2024-05-15 10:30:50.927774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.354 [2024-05-15 10:30:50.927779] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.354 [2024-05-15 10:30:50.927783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.354 [2024-05-15 10:30:50.927795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.354 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:50.937661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:50.937809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:50.937824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:50.937829] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:50.937833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:50.937846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:50.947689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:50.947801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:50.947821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:50.947827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:50.947831] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:50.947846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 
00:37:05.355 [2024-05-15 10:30:50.957765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:50.957867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:50.957881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:50.957886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:50.957891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:50.957904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:50.967625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:50.967735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:50.967749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:50.967755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:50.967759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:50.967771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:50.977662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:50.977773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:50.977787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:50.977792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:50.977797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:50.977809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 
00:37:05.355 [2024-05-15 10:30:50.987784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:50.987905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:50.987924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:50.987930] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:50.987935] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:50.987951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:50.997851] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:50.997962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:50.997976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:50.997981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:50.997986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:50.997999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:51.007867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:51.007981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:51.008001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:51.008007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:51.008016] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:51.008031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 
00:37:05.355 [2024-05-15 10:30:51.017908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:51.018022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:51.018042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:51.018048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:51.018052] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:51.018067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:51.027908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:51.028018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:51.028038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:51.028044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:51.028049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:51.028064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:51.037890] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:51.038027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:51.038047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:51.038052] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:51.038057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:51.038073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 
00:37:05.355 [2024-05-15 10:30:51.047953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:51.048074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:51.048094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:51.048100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:51.048105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:51.048120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:51.058005] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:51.058119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:51.058139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:51.058145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.355 [2024-05-15 10:30:51.058149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.355 [2024-05-15 10:30:51.058165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.355 qpair failed and we were unable to recover it. 00:37:05.355 [2024-05-15 10:30:51.067990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.355 [2024-05-15 10:30:51.068094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.355 [2024-05-15 10:30:51.068108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.355 [2024-05-15 10:30:51.068114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.068118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.068131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 
00:37:05.356 [2024-05-15 10:30:51.077938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.356 [2024-05-15 10:30:51.078051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.356 [2024-05-15 10:30:51.078071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.356 [2024-05-15 10:30:51.078076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.078081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.078096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 00:37:05.356 [2024-05-15 10:30:51.088054] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.356 [2024-05-15 10:30:51.088164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.356 [2024-05-15 10:30:51.088179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.356 [2024-05-15 10:30:51.088184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.088189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.088202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 00:37:05.356 [2024-05-15 10:30:51.098105] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.356 [2024-05-15 10:30:51.098251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.356 [2024-05-15 10:30:51.098264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.356 [2024-05-15 10:30:51.098273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.098277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.098290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 
00:37:05.356 [2024-05-15 10:30:51.108121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.356 [2024-05-15 10:30:51.108226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.356 [2024-05-15 10:30:51.108239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.356 [2024-05-15 10:30:51.108244] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.108248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.108261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 00:37:05.356 [2024-05-15 10:30:51.118159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.356 [2024-05-15 10:30:51.118261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.356 [2024-05-15 10:30:51.118274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.356 [2024-05-15 10:30:51.118279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.118283] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.118302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 00:37:05.356 [2024-05-15 10:30:51.128193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.356 [2024-05-15 10:30:51.128306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.356 [2024-05-15 10:30:51.128319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.356 [2024-05-15 10:30:51.128324] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.128329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.128341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 
00:37:05.356 [2024-05-15 10:30:51.138053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.356 [2024-05-15 10:30:51.138171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.356 [2024-05-15 10:30:51.138184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.356 [2024-05-15 10:30:51.138189] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.356 [2024-05-15 10:30:51.138193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.356 [2024-05-15 10:30:51.138205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.356 qpair failed and we were unable to recover it. 00:37:05.619 [2024-05-15 10:30:51.148260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.619 [2024-05-15 10:30:51.148371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.619 [2024-05-15 10:30:51.148384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.619 [2024-05-15 10:30:51.148389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.619 [2024-05-15 10:30:51.148394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.619 [2024-05-15 10:30:51.148406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.619 qpair failed and we were unable to recover it. 00:37:05.619 [2024-05-15 10:30:51.158262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.619 [2024-05-15 10:30:51.158370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.619 [2024-05-15 10:30:51.158383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.619 [2024-05-15 10:30:51.158388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.619 [2024-05-15 10:30:51.158393] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.619 [2024-05-15 10:30:51.158405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.619 qpair failed and we were unable to recover it. 
00:37:05.619 [2024-05-15 10:30:51.168208] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.619 [2024-05-15 10:30:51.168313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.619 [2024-05-15 10:30:51.168326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.619 [2024-05-15 10:30:51.168330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.168335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.168345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.178269] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.178382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.178395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.178400] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.178404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.178416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.188325] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.188428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.188444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.188450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.188454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.188466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 
00:37:05.620 [2024-05-15 10:30:51.198213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.198314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.198327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.198332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.198336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.198349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.208390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.208494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.208507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.208512] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.208517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.208529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.218398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.218503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.218516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.218521] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.218525] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.218537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 
00:37:05.620 [2024-05-15 10:30:51.228451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.228552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.228566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.228570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.228575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.228590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.238475] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.238605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.238618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.238624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.238628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.238640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.248515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.248623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.248636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.248641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.248645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.248657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 
00:37:05.620 [2024-05-15 10:30:51.258531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.258649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.258662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.258667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.258671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.258683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.268558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.268660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.268673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.268678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.268682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.268694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.278559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.278663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.278680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.278685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.278689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.278701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 
00:37:05.620 [2024-05-15 10:30:51.288685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.288832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.288845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.288850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.288854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.288865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.298848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.298979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.298998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.299004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.299008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.299024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.308621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.308730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.308749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.308755] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.308759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.308774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 
00:37:05.620 [2024-05-15 10:30:51.318646] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.318747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.318766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.318772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.318777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.318796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.328710] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.328821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.328840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.328846] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.328851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.328866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.338672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.338799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.338819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.338824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.338829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.338845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 
00:37:05.620 [2024-05-15 10:30:51.348752] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.348857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.348871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.348876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.348880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.348893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.358742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.358897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.358911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.358916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.358920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.358933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.368858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.368979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.368998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.369004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.369009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.369024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 
00:37:05.620 [2024-05-15 10:30:51.378793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.378905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.378919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.378924] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.378929] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.378942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.388831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.388934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.388954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.388959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.388964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.620 [2024-05-15 10:30:51.388980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.620 qpair failed and we were unable to recover it. 00:37:05.620 [2024-05-15 10:30:51.398876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.620 [2024-05-15 10:30:51.398981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.620 [2024-05-15 10:30:51.399000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.620 [2024-05-15 10:30:51.399006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.620 [2024-05-15 10:30:51.399011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.621 [2024-05-15 10:30:51.399026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.621 qpair failed and we were unable to recover it. 
00:37:05.621 [2024-05-15 10:30:51.408930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.621 [2024-05-15 10:30:51.409088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.621 [2024-05-15 10:30:51.409107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.621 [2024-05-15 10:30:51.409113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.621 [2024-05-15 10:30:51.409121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.621 [2024-05-15 10:30:51.409137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.621 qpair failed and we were unable to recover it. 00:37:05.887 [2024-05-15 10:30:51.418918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.887 [2024-05-15 10:30:51.419025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.887 [2024-05-15 10:30:51.419045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.887 [2024-05-15 10:30:51.419051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.887 [2024-05-15 10:30:51.419055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.887 [2024-05-15 10:30:51.419071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.887 qpair failed and we were unable to recover it. 00:37:05.887 [2024-05-15 10:30:51.428946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.887 [2024-05-15 10:30:51.429052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.887 [2024-05-15 10:30:51.429071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.887 [2024-05-15 10:30:51.429077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.887 [2024-05-15 10:30:51.429081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.887 [2024-05-15 10:30:51.429097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.887 qpair failed and we were unable to recover it. 
00:37:05.887 [2024-05-15 10:30:51.438972] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.887 [2024-05-15 10:30:51.439093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.887 [2024-05-15 10:30:51.439113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.887 [2024-05-15 10:30:51.439119] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.887 [2024-05-15 10:30:51.439124] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.887 [2024-05-15 10:30:51.439139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.887 qpair failed and we were unable to recover it. 00:37:05.887 [2024-05-15 10:30:51.448930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.887 [2024-05-15 10:30:51.449034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.887 [2024-05-15 10:30:51.449048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.887 [2024-05-15 10:30:51.449053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.887 [2024-05-15 10:30:51.449058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.887 [2024-05-15 10:30:51.449071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.887 qpair failed and we were unable to recover it. 00:37:05.887 [2024-05-15 10:30:51.459035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.887 [2024-05-15 10:30:51.459142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.887 [2024-05-15 10:30:51.459162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.887 [2024-05-15 10:30:51.459168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.887 [2024-05-15 10:30:51.459173] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.887 [2024-05-15 10:30:51.459188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.887 qpair failed and we were unable to recover it. 
00:37:05.887 [2024-05-15 10:30:51.469064] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.887 [2024-05-15 10:30:51.469167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.887 [2024-05-15 10:30:51.469181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.469186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.469191] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.469203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.479092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.479190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.479204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.479209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.479213] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.479225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.489168] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.489272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.489286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.489296] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.489300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.489312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 
00:37:05.888 [2024-05-15 10:30:51.499122] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.499235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.499248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.499257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.499262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.499274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.509148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.509257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.509270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.509275] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.509279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.509297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.519252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.519353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.519367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.519372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.519376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.519388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 
00:37:05.888 [2024-05-15 10:30:51.529198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.529308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.529321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.529326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.529331] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.529343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.539270] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.539375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.539389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.539394] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.539398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.539411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.549293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.549395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.549408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.549413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.549418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.549430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 
00:37:05.888 [2024-05-15 10:30:51.559278] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.559381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.559395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.559401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.559405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.559417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.569412] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.569515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.569529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.569534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.569538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.569550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.579389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.579498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.579511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.579516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.579520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.579533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 
00:37:05.888 [2024-05-15 10:30:51.589297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.589419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.589435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.589440] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.589444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.589455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.599404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.599507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.888 [2024-05-15 10:30:51.599520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.888 [2024-05-15 10:30:51.599525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.888 [2024-05-15 10:30:51.599529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.888 [2024-05-15 10:30:51.599541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.888 qpair failed and we were unable to recover it. 00:37:05.888 [2024-05-15 10:30:51.609470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.888 [2024-05-15 10:30:51.609573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.889 [2024-05-15 10:30:51.609586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.889 [2024-05-15 10:30:51.609592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.889 [2024-05-15 10:30:51.609596] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.889 [2024-05-15 10:30:51.609608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.889 qpair failed and we were unable to recover it. 
00:37:05.889 [2024-05-15 10:30:51.619512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.889 [2024-05-15 10:30:51.619618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.889 [2024-05-15 10:30:51.619631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.889 [2024-05-15 10:30:51.619636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.889 [2024-05-15 10:30:51.619640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.889 [2024-05-15 10:30:51.619652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.889 qpair failed and we were unable to recover it. 00:37:05.889 [2024-05-15 10:30:51.629516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.889 [2024-05-15 10:30:51.629615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.889 [2024-05-15 10:30:51.629628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.889 [2024-05-15 10:30:51.629633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.889 [2024-05-15 10:30:51.629637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.889 [2024-05-15 10:30:51.629649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.889 qpair failed and we were unable to recover it. 00:37:05.889 [2024-05-15 10:30:51.639547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.889 [2024-05-15 10:30:51.639656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.889 [2024-05-15 10:30:51.639669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.889 [2024-05-15 10:30:51.639674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.889 [2024-05-15 10:30:51.639679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.889 [2024-05-15 10:30:51.639690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.889 qpair failed and we were unable to recover it. 
00:37:05.889 [2024-05-15 10:30:51.649614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.889 [2024-05-15 10:30:51.649723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.889 [2024-05-15 10:30:51.649736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.889 [2024-05-15 10:30:51.649741] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.889 [2024-05-15 10:30:51.649745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.889 [2024-05-15 10:30:51.649757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.889 qpair failed and we were unable to recover it. 00:37:05.889 [2024-05-15 10:30:51.659789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.889 [2024-05-15 10:30:51.659892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.889 [2024-05-15 10:30:51.659905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.889 [2024-05-15 10:30:51.659910] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.889 [2024-05-15 10:30:51.659914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.889 [2024-05-15 10:30:51.659926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.889 qpair failed and we were unable to recover it. 00:37:05.889 [2024-05-15 10:30:51.669617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:05.889 [2024-05-15 10:30:51.669722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:05.889 [2024-05-15 10:30:51.669741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:05.889 [2024-05-15 10:30:51.669747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:05.889 [2024-05-15 10:30:51.669752] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:05.889 [2024-05-15 10:30:51.669767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:05.889 qpair failed and we were unable to recover it. 
00:37:06.184 [2024-05-15 10:30:51.679648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.184 [2024-05-15 10:30:51.679746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.184 [2024-05-15 10:30:51.679764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.184 [2024-05-15 10:30:51.679770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.184 [2024-05-15 10:30:51.679774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.184 [2024-05-15 10:30:51.679787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.184 qpair failed and we were unable to recover it. 00:37:06.184 [2024-05-15 10:30:51.689739] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.184 [2024-05-15 10:30:51.689844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.184 [2024-05-15 10:30:51.689858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.184 [2024-05-15 10:30:51.689863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.184 [2024-05-15 10:30:51.689867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.184 [2024-05-15 10:30:51.689880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.184 qpair failed and we were unable to recover it. 00:37:06.184 [2024-05-15 10:30:51.699669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.184 [2024-05-15 10:30:51.699772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.184 [2024-05-15 10:30:51.699785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.184 [2024-05-15 10:30:51.699790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.184 [2024-05-15 10:30:51.699794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.184 [2024-05-15 10:30:51.699806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.184 qpair failed and we were unable to recover it. 
00:37:06.184 [2024-05-15 10:30:51.709718] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.184 [2024-05-15 10:30:51.709823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.184 [2024-05-15 10:30:51.709836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.184 [2024-05-15 10:30:51.709841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.184 [2024-05-15 10:30:51.709845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.184 [2024-05-15 10:30:51.709858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.184 qpair failed and we were unable to recover it. 00:37:06.184 [2024-05-15 10:30:51.719788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.184 [2024-05-15 10:30:51.719889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.184 [2024-05-15 10:30:51.719902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.184 [2024-05-15 10:30:51.719907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.184 [2024-05-15 10:30:51.719911] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.184 [2024-05-15 10:30:51.719927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.184 qpair failed and we were unable to recover it. 00:37:06.184 [2024-05-15 10:30:51.729836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.184 [2024-05-15 10:30:51.729941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.184 [2024-05-15 10:30:51.729954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.184 [2024-05-15 10:30:51.729959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.184 [2024-05-15 10:30:51.729963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.184 [2024-05-15 10:30:51.729976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.184 qpair failed and we were unable to recover it. 
00:37:06.184 [2024-05-15 10:30:51.739835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.184 [2024-05-15 10:30:51.739940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.184 [2024-05-15 10:30:51.739953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.184 [2024-05-15 10:30:51.739958] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.739962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.739975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.749784] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.749880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.749893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.749898] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.749902] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.749914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.759904] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.760004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.760017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.760022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.760027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.760039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 
00:37:06.185 [2024-05-15 10:30:51.769990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.770137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.770153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.770158] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.770162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.770174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.779816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.779923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.779936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.779941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.779946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.779957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.789938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.790039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.790053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.790057] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.790062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.790074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 
00:37:06.185 [2024-05-15 10:30:51.799971] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.800079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.800093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.800098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.800103] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.800115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.810042] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.810146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.810160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.810165] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.810175] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.810188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.820015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.820120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.820134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.820139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.820143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.820155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 
00:37:06.185 [2024-05-15 10:30:51.830056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.830154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.830168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.830173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.830177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.830188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.840098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.840200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.840212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.840217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.840222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.840233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 00:37:06.185 [2024-05-15 10:30:51.850156] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.185 [2024-05-15 10:30:51.850258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.185 [2024-05-15 10:30:51.850271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.185 [2024-05-15 10:30:51.850276] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.185 [2024-05-15 10:30:51.850280] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.185 [2024-05-15 10:30:51.850297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.185 qpair failed and we were unable to recover it. 
00:37:06.185 [2024-05-15 10:30:51.860150] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:06.185 [2024-05-15 10:30:51.860263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:06.185 [2024-05-15 10:30:51.860276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:06.185 [2024-05-15 10:30:51.860281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:06.185 [2024-05-15 10:30:51.860285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:06.185 [2024-05-15 10:30:51.860304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:06.185 qpair failed and we were unable to recover it.
[... the same seven-record error block repeats 68 more times at roughly 10 ms intervals, from 10:30:51.870 through 10:30:52.542 (elapsed-time prefixes 00:37:06.185 through 00:37:06.981); only the timestamps vary. Every retry fails against tqpair=0x7f9cbc000b90 (qpair id 2) with "Unknown controller ID 0x1" on the target and sct 1, sc 130 on the host, each ending in "qpair failed and we were unable to recover it." ...]
00:37:06.982 [2024-05-15 10:30:52.552021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.552121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.552140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.552146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.552151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.552166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.562071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.562177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.562192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.562197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.562202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.562215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.572125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.572225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.572245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.572250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.572254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.572267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 
00:37:06.982 [2024-05-15 10:30:52.582111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.582261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.582275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.582280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.582285] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.582303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.592153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.592248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.592261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.592266] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.592270] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.592282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.602066] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.602163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.602176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.602181] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.602185] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.602198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 
00:37:06.982 [2024-05-15 10:30:52.612200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.612301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.612315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.612320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.612327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.612340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.622230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.622336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.622350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.622355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.622359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.622371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.632262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.632370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.632383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.632388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.632392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.632405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 
00:37:06.982 [2024-05-15 10:30:52.642283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.642380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.642394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.642399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.642403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.642416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.652330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.652455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.652469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.652474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.652478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.652490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.662349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.662455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.662468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.662473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.662478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.662490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 
00:37:06.982 [2024-05-15 10:30:52.672340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.982 [2024-05-15 10:30:52.672437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.982 [2024-05-15 10:30:52.672450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.982 [2024-05-15 10:30:52.672455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.982 [2024-05-15 10:30:52.672459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.982 [2024-05-15 10:30:52.672472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.982 qpair failed and we were unable to recover it. 00:37:06.982 [2024-05-15 10:30:52.682409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.682508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.682521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.682526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.682530] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.682543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 00:37:06.983 [2024-05-15 10:30:52.692424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.692525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.692538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.692543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.692547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.692559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 
00:37:06.983 [2024-05-15 10:30:52.702440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.702542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.702555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.702560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.702568] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.702581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 00:37:06.983 [2024-05-15 10:30:52.712491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.712587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.712600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.712605] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.712609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.712620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 00:37:06.983 [2024-05-15 10:30:52.722565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.722697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.722711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.722716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.722720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.722732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 
00:37:06.983 [2024-05-15 10:30:52.732418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.732523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.732537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.732542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.732546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.732558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 00:37:06.983 [2024-05-15 10:30:52.742434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.742538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.742551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.742556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.742560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.742572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 00:37:06.983 [2024-05-15 10:30:52.752567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.752662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.752676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.752681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.752685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.752697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 
00:37:06.983 [2024-05-15 10:30:52.762619] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.762713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.762726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.762731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.762736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.762748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 00:37:06.983 [2024-05-15 10:30:52.772611] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:06.983 [2024-05-15 10:30:52.772709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:06.983 [2024-05-15 10:30:52.772722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:06.983 [2024-05-15 10:30:52.772727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:06.983 [2024-05-15 10:30:52.772731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:06.983 [2024-05-15 10:30:52.772743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:06.983 qpair failed and we were unable to recover it. 00:37:07.247 [2024-05-15 10:30:52.782668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.247 [2024-05-15 10:30:52.782772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.247 [2024-05-15 10:30:52.782785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.247 [2024-05-15 10:30:52.782790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.247 [2024-05-15 10:30:52.782794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.247 [2024-05-15 10:30:52.782806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.247 qpair failed and we were unable to recover it. 
00:37:07.247 [2024-05-15 10:30:52.792654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.247 [2024-05-15 10:30:52.792755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.247 [2024-05-15 10:30:52.792775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.247 [2024-05-15 10:30:52.792785] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.247 [2024-05-15 10:30:52.792789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.247 [2024-05-15 10:30:52.792805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.247 qpair failed and we were unable to recover it. 00:37:07.247 [2024-05-15 10:30:52.802719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.247 [2024-05-15 10:30:52.802823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.247 [2024-05-15 10:30:52.802843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.247 [2024-05-15 10:30:52.802849] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.247 [2024-05-15 10:30:52.802853] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.247 [2024-05-15 10:30:52.802869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.247 qpair failed and we were unable to recover it. 00:37:07.247 [2024-05-15 10:30:52.812766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.247 [2024-05-15 10:30:52.812869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.247 [2024-05-15 10:30:52.812888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.247 [2024-05-15 10:30:52.812894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.247 [2024-05-15 10:30:52.812899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.247 [2024-05-15 10:30:52.812915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.247 qpair failed and we were unable to recover it. 
00:37:07.247 [2024-05-15 10:30:52.822765] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.247 [2024-05-15 10:30:52.822869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.247 [2024-05-15 10:30:52.822888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.247 [2024-05-15 10:30:52.822894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.247 [2024-05-15 10:30:52.822899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.247 [2024-05-15 10:30:52.822915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.247 qpair failed and we were unable to recover it. 00:37:07.247 [2024-05-15 10:30:52.832806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.247 [2024-05-15 10:30:52.832915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.247 [2024-05-15 10:30:52.832935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.247 [2024-05-15 10:30:52.832941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.247 [2024-05-15 10:30:52.832945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.247 [2024-05-15 10:30:52.832961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.247 qpair failed and we were unable to recover it. 00:37:07.247 [2024-05-15 10:30:52.842827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.842924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.842943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.842949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.842953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.842969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 
00:37:07.248 [2024-05-15 10:30:52.852862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.852966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.852985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.852992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.852997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.853012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.862932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.863043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.863063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.863068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.863073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.863089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.872891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.872990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.873009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.873015] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.873020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.873034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 
00:37:07.248 [2024-05-15 10:30:52.882881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.882991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.883013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.883019] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.883024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.883040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.892957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.893061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.893081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.893087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.893092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.893107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.902994] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.903097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.903117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.903123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.903127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.903144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 
00:37:07.248 [2024-05-15 10:30:52.912931] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.913076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.913096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.913102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.913106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.913122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.923044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.923168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.923187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.923193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.923198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.923217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.933098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.933230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.933245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.933250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.933254] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.933267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 
00:37:07.248 [2024-05-15 10:30:52.943123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.943221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.943234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.943239] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.943243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.943255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.953135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.953238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.953251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.953257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.953261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.953273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 00:37:07.248 [2024-05-15 10:30:52.963174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.963272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.963286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.963297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.963303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.248 [2024-05-15 10:30:52.963318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.248 qpair failed and we were unable to recover it. 
00:37:07.248 [2024-05-15 10:30:52.973075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.248 [2024-05-15 10:30:52.973190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.248 [2024-05-15 10:30:52.973206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.248 [2024-05-15 10:30:52.973211] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.248 [2024-05-15 10:30:52.973215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.249 [2024-05-15 10:30:52.973228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.249 qpair failed and we were unable to recover it. 00:37:07.249 [2024-05-15 10:30:52.983223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.249 [2024-05-15 10:30:52.983333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.249 [2024-05-15 10:30:52.983347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.249 [2024-05-15 10:30:52.983352] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.249 [2024-05-15 10:30:52.983356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.249 [2024-05-15 10:30:52.983368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.249 qpair failed and we were unable to recover it. 00:37:07.249 [2024-05-15 10:30:52.993216] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.249 [2024-05-15 10:30:52.993318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.249 [2024-05-15 10:30:52.993331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.249 [2024-05-15 10:30:52.993336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.249 [2024-05-15 10:30:52.993340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.249 [2024-05-15 10:30:52.993353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.249 qpair failed and we were unable to recover it. 
00:37:07.249 [2024-05-15 10:30:53.003253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.249 [2024-05-15 10:30:53.003350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.249 [2024-05-15 10:30:53.003364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.249 [2024-05-15 10:30:53.003369] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.249 [2024-05-15 10:30:53.003373] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.249 [2024-05-15 10:30:53.003386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.249 qpair failed and we were unable to recover it. 00:37:07.249 [2024-05-15 10:30:53.013331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.249 [2024-05-15 10:30:53.013427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.249 [2024-05-15 10:30:53.013440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.249 [2024-05-15 10:30:53.013445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.249 [2024-05-15 10:30:53.013450] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.249 [2024-05-15 10:30:53.013465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.249 qpair failed and we were unable to recover it. 00:37:07.249 [2024-05-15 10:30:53.023389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.249 [2024-05-15 10:30:53.023540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.249 [2024-05-15 10:30:53.023553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.249 [2024-05-15 10:30:53.023558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.249 [2024-05-15 10:30:53.023562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.249 [2024-05-15 10:30:53.023575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.249 qpair failed and we were unable to recover it. 
00:37:07.249 [2024-05-15 10:30:53.033354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.249 [2024-05-15 10:30:53.033482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.249 [2024-05-15 10:30:53.033495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.249 [2024-05-15 10:30:53.033500] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.249 [2024-05-15 10:30:53.033504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.249 [2024-05-15 10:30:53.033516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.249 qpair failed and we were unable to recover it. 00:37:07.513 [2024-05-15 10:30:53.043453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.513 [2024-05-15 10:30:53.043565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.513 [2024-05-15 10:30:53.043578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.513 [2024-05-15 10:30:53.043583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.513 [2024-05-15 10:30:53.043587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.513 [2024-05-15 10:30:53.043600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.513 qpair failed and we were unable to recover it. 00:37:07.513 [2024-05-15 10:30:53.053389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:07.513 [2024-05-15 10:30:53.053485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:07.513 [2024-05-15 10:30:53.053498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:07.513 [2024-05-15 10:30:53.053503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:07.513 [2024-05-15 10:30:53.053508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:07.513 [2024-05-15 10:30:53.053520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:07.513 qpair failed and we were unable to recover it. 
00:37:07.513 [2024-05-15 10:30:53.063442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.513 [2024-05-15 10:30:53.063588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.513 [2024-05-15 10:30:53.063601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.513 [2024-05-15 10:30:53.063606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.513 [2024-05-15 10:30:53.063610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.513 [2024-05-15 10:30:53.063623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.513 qpair failed and we were unable to recover it.
00:37:07.513 [2024-05-15 10:30:53.073409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.513 [2024-05-15 10:30:53.073511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.513 [2024-05-15 10:30:53.073524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.513 [2024-05-15 10:30:53.073529] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.513 [2024-05-15 10:30:53.073534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.513 [2024-05-15 10:30:53.073546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.513 qpair failed and we were unable to recover it.
00:37:07.513 [2024-05-15 10:30:53.083495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.513 [2024-05-15 10:30:53.083591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.513 [2024-05-15 10:30:53.083604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.513 [2024-05-15 10:30:53.083609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.513 [2024-05-15 10:30:53.083613] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.513 [2024-05-15 10:30:53.083625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.513 qpair failed and we were unable to recover it.
00:37:07.513 [2024-05-15 10:30:53.093525] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.513 [2024-05-15 10:30:53.093641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.513 [2024-05-15 10:30:53.093655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.513 [2024-05-15 10:30:53.093660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.513 [2024-05-15 10:30:53.093664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.513 [2024-05-15 10:30:53.093676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.513 qpair failed and we were unable to recover it.
00:37:07.513 [2024-05-15 10:30:53.103537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.103639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.103652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.103657] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.103664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.103676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.113565] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.113671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.113684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.113688] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.113693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.113704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.123616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.123716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.123728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.123733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.123737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.123749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.133601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.133698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.133707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.133713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.133717] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.133726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.143663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.143762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.143775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.143780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.143784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.143796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.153635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.153740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.153760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.153766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.153770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.153786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.163726] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.163832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.163852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.163857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.163862] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.163878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.173747] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.173842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.173857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.173862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.173866] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.173879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.183748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.183851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.183871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.183876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.183881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.183896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.193832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.193971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.193991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.194000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.194005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.194021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.203805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.203910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.203930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.203936] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.203940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.203956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.213835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.213940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.213959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.213965] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.213970] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.213985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.223846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.223956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.223976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.223982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.223986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.224001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.514 [2024-05-15 10:30:53.233899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.514 [2024-05-15 10:30:53.234004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.514 [2024-05-15 10:30:53.234023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.514 [2024-05-15 10:30:53.234029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.514 [2024-05-15 10:30:53.234033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.514 [2024-05-15 10:30:53.234049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.514 qpair failed and we were unable to recover it.
00:37:07.515 [2024-05-15 10:30:53.243967] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.515 [2024-05-15 10:30:53.244072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.515 [2024-05-15 10:30:53.244091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.515 [2024-05-15 10:30:53.244097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.515 [2024-05-15 10:30:53.244101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.515 [2024-05-15 10:30:53.244117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.515 qpair failed and we were unable to recover it.
00:37:07.515 [2024-05-15 10:30:53.253963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.515 [2024-05-15 10:30:53.254070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.515 [2024-05-15 10:30:53.254089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.515 [2024-05-15 10:30:53.254095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.515 [2024-05-15 10:30:53.254099] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.515 [2024-05-15 10:30:53.254115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.515 qpair failed and we were unable to recover it.
00:37:07.515 [2024-05-15 10:30:53.263843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.515 [2024-05-15 10:30:53.263943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.515 [2024-05-15 10:30:53.263958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.515 [2024-05-15 10:30:53.263963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.515 [2024-05-15 10:30:53.263967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.515 [2024-05-15 10:30:53.263980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.515 qpair failed and we were unable to recover it.
00:37:07.515 [2024-05-15 10:30:53.273987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.515 [2024-05-15 10:30:53.274093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.515 [2024-05-15 10:30:53.274112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.515 [2024-05-15 10:30:53.274118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.515 [2024-05-15 10:30:53.274122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.515 [2024-05-15 10:30:53.274138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.515 qpair failed and we were unable to recover it.
00:37:07.515 [2024-05-15 10:30:53.283926] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.515 [2024-05-15 10:30:53.284031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.515 [2024-05-15 10:30:53.284054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.515 [2024-05-15 10:30:53.284061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.515 [2024-05-15 10:30:53.284065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.515 [2024-05-15 10:30:53.284081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.515 qpair failed and we were unable to recover it.
00:37:07.515 [2024-05-15 10:30:53.294084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.515 [2024-05-15 10:30:53.294188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.515 [2024-05-15 10:30:53.294208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.515 [2024-05-15 10:30:53.294214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.515 [2024-05-15 10:30:53.294218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.515 [2024-05-15 10:30:53.294234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.515 qpair failed and we were unable to recover it.
00:37:07.515 [2024-05-15 10:30:53.304091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.515 [2024-05-15 10:30:53.304193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.515 [2024-05-15 10:30:53.304207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.515 [2024-05-15 10:30:53.304212] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.515 [2024-05-15 10:30:53.304216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.515 [2024-05-15 10:30:53.304229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.515 qpair failed and we were unable to recover it.
00:37:07.808 [2024-05-15 10:30:53.313970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.808 [2024-05-15 10:30:53.314074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.808 [2024-05-15 10:30:53.314088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.808 [2024-05-15 10:30:53.314093] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.808 [2024-05-15 10:30:53.314097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.808 [2024-05-15 10:30:53.314109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.808 qpair failed and we were unable to recover it.
00:37:07.808 [2024-05-15 10:30:53.324176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.808 [2024-05-15 10:30:53.324279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.808 [2024-05-15 10:30:53.324297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.808 [2024-05-15 10:30:53.324302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.808 [2024-05-15 10:30:53.324306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.808 [2024-05-15 10:30:53.324319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.808 qpair failed and we were unable to recover it.
00:37:07.808 [2024-05-15 10:30:53.334173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.808 [2024-05-15 10:30:53.334278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.808 [2024-05-15 10:30:53.334297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.808 [2024-05-15 10:30:53.334302] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.808 [2024-05-15 10:30:53.334306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.808 [2024-05-15 10:30:53.334318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.808 qpair failed and we were unable to recover it.
00:37:07.808 [2024-05-15 10:30:53.344173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.808 [2024-05-15 10:30:53.344275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.808 [2024-05-15 10:30:53.344288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.808 [2024-05-15 10:30:53.344298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.808 [2024-05-15 10:30:53.344302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.808 [2024-05-15 10:30:53.344315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.808 qpair failed and we were unable to recover it.
00:37:07.808 [2024-05-15 10:30:53.354275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.808 [2024-05-15 10:30:53.354378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.808 [2024-05-15 10:30:53.354391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.808 [2024-05-15 10:30:53.354396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.354400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.354413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.364284] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.364378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.364391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.364396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.364400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.364412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.374289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.374399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.374415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.374420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.374424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.374436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.384300] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.384405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.384418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.384423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.384427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.384439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.394302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.394401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.394415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.394420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.394424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.394436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.404337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.404435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.404448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.404453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.404457] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.404469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.414364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.414466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.414479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.414485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.414489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.414504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.424299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.424405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.424418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.424423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.424427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.424439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.434324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.434449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.434462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.434467] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.434471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.434483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.444446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.444553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.444566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.444571] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.444575] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.444588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.454446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.454544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.454557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.454562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.454566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.454578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.464512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.464618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.464634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.464639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.464643] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.464655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.474521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.809 [2024-05-15 10:30:53.474623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.809 [2024-05-15 10:30:53.474636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.809 [2024-05-15 10:30:53.474641] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.809 [2024-05-15 10:30:53.474645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.809 [2024-05-15 10:30:53.474657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.809 qpair failed and we were unable to recover it.
00:37:07.809 [2024-05-15 10:30:53.484530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.484626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.484639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.484644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.484648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.484660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.494460] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.494555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.494568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.494573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.494578] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.494590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.504486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.504585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.504598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.504603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.504610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.504623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.514635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.514733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.514747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.514752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.514756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.514768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.524634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.524751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.524770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.524776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.524781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.524796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.534567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.534709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.534724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.534730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.534734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.534747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.544758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.544863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.544877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.544882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.544886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.544899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.554719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.554817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.554830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.554836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.554840] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.554852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:07.810 [2024-05-15 10:30:53.564811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:07.810 [2024-05-15 10:30:53.564909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:07.810 [2024-05-15 10:30:53.564922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:07.810 [2024-05-15 10:30:53.564927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:07.810 [2024-05-15 10:30:53.564931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:07.810 [2024-05-15 10:30:53.564943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:07.810 qpair failed and we were unable to recover it.
00:37:08.074 [2024-05-15 10:30:53.574800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:08.074 [2024-05-15 10:30:53.574901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:08.074 [2024-05-15 10:30:53.574914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:08.074 [2024-05-15 10:30:53.574919] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:08.074 [2024-05-15 10:30:53.574923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:08.074 [2024-05-15 10:30:53.574936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:08.074 qpair failed and we were unable to recover it.
00:37:08.074 [2024-05-15 10:30:53.584840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:08.074 [2024-05-15 10:30:53.584936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:08.074 [2024-05-15 10:30:53.584950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:08.074 [2024-05-15 10:30:53.584954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:08.074 [2024-05-15 10:30:53.584959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:08.074 [2024-05-15 10:30:53.584971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:08.074 qpair failed and we were unable to recover it.
00:37:08.074 [2024-05-15 10:30:53.594874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:08.074 [2024-05-15 10:30:53.594981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:08.074 [2024-05-15 10:30:53.594994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:08.074 [2024-05-15 10:30:53.595003] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:08.074 [2024-05-15 10:30:53.595007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:08.074 [2024-05-15 10:30:53.595019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:08.074 qpair failed and we were unable to recover it.
00:37:08.074 [2024-05-15 10:30:53.604830] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:08.074 [2024-05-15 10:30:53.604929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:08.074 [2024-05-15 10:30:53.604942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:08.074 [2024-05-15 10:30:53.604947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:08.074 [2024-05-15 10:30:53.604952] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:08.074 [2024-05-15 10:30:53.604963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:08.074 qpair failed and we were unable to recover it.
00:37:08.074 [2024-05-15 10:30:53.614941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:08.074 [2024-05-15 10:30:53.615036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:08.074 [2024-05-15 10:30:53.615049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:08.074 [2024-05-15 10:30:53.615054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:08.074 [2024-05-15 10:30:53.615058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:08.075 [2024-05-15 10:30:53.615071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:08.075 qpair failed and we were unable to recover it.
00:37:08.075 [2024-05-15 10:30:53.624953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:08.075 [2024-05-15 10:30:53.625092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:08.075 [2024-05-15 10:30:53.625105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:08.075 [2024-05-15 10:30:53.625110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:08.075 [2024-05-15 10:30:53.625115] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90
00:37:08.075 [2024-05-15 10:30:53.625127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:37:08.075 qpair failed and we were unable to recover it.
00:37:08.075 [2024-05-15 10:30:53.634950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.635059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.635079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.635085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.635089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.635105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.645013] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.645118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.645137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.645143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.645148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.645163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.655040] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.655141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.655156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.655161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.655165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.655177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 
00:37:08.075 [2024-05-15 10:30:53.665039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.665165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.665179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.665184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.665188] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.665200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.675053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.675158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.675171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.675176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.675180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.675192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.685092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.685191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.685204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.685213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.685217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.685229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 
00:37:08.075 [2024-05-15 10:30:53.695151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.695298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.695311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.695316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.695320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.695333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.705197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.705306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.705320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.705325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.705329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.705341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.715175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.715277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.715295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.715300] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.715304] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.715316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 
00:37:08.075 [2024-05-15 10:30:53.725218] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.725323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.725336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.725340] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.725345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.725357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.735244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.075 [2024-05-15 10:30:53.735356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.075 [2024-05-15 10:30:53.735369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.075 [2024-05-15 10:30:53.735374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.075 [2024-05-15 10:30:53.735378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.075 [2024-05-15 10:30:53.735390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.075 qpair failed and we were unable to recover it. 00:37:08.075 [2024-05-15 10:30:53.745294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.745399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.745412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.745417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.745421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.745433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 
00:37:08.076 [2024-05-15 10:30:53.755280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.755378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.755391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.755396] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.755401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.755413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 00:37:08.076 [2024-05-15 10:30:53.765345] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.765444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.765457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.765463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.765467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.765480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 00:37:08.076 [2024-05-15 10:30:53.775352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.775448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.775464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.775469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.775473] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.775485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 
00:37:08.076 [2024-05-15 10:30:53.785413] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.785516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.785529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.785534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.785538] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.785550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 00:37:08.076 [2024-05-15 10:30:53.795398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.795495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.795508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.795513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.795517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.795530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 00:37:08.076 [2024-05-15 10:30:53.805321] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.805417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.805430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.805435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.805439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.805452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 
00:37:08.076 [2024-05-15 10:30:53.815397] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.815492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.815506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.815511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.815515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.815530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 00:37:08.076 [2024-05-15 10:30:53.825363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.825463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.825477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.825482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.825486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.825498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 00:37:08.076 [2024-05-15 10:30:53.835430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.835528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.835541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.835546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.835551] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.835562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 
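The block above repeats because the host retries the I/O qpair CONNECT roughly every 10 ms while the target, freshly restarted by the disconnect test, no longer recognizes controller ID 0x1: each CONNECT is rejected with sct 1 / sc 130 (0x82, a command-specific Fabrics CONNECT status), which the host surfaces as CQ transport error -6 (ENXIO, "No such device or address") on qpair id 2. To tally how many rejections each transport qpair accumulated, a minimal sketch over a saved copy of this log could be the following (build.log is a hypothetical file name, not part of the test suite):

  # Count CONNECT rejections per transport qpair address.
  # grep -o is used because the wrapped log packs several
  # entries onto one physical line.
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' build.log \
    | sort | uniq -c | sort -rn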
00:37:08.076 [2024-05-15 10:30:53.845529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.076 [2024-05-15 10:30:53.845628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.076 [2024-05-15 10:30:53.845641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.076 [2024-05-15 10:30:53.845646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.076 [2024-05-15 10:30:53.845651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cbc000b90 00:37:08.076 [2024-05-15 10:30:53.845663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:08.076 qpair failed and we were unable to recover it. 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.076 Read completed with error (sct=0, sc=8) 00:37:08.076 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error 
(sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 [2024-05-15 10:30:53.846057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:08.077 [2024-05-15 10:30:53.855585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.077 [2024-05-15 10:30:53.855719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.077 [2024-05-15 10:30:53.855741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.077 [2024-05-15 10:30:53.855750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.077 [2024-05-15 10:30:53.855757] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12c22c0 00:37:08.077 [2024-05-15 10:30:53.855775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:08.077 qpair failed and we were unable to recover it. 00:37:08.077 [2024-05-15 10:30:53.865515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.077 [2024-05-15 10:30:53.865646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.077 [2024-05-15 10:30:53.865664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.077 [2024-05-15 10:30:53.865671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.077 [2024-05-15 10:30:53.865678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12c22c0 00:37:08.077 [2024-05-15 10:30:53.865695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:08.077 qpair failed and we were unable to recover it. 
00:37:08.077 [2024-05-15 10:30:53.866110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12cfe80 is same with the state(5) to be set 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Write completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.077 starting I/O failed 00:37:08.077 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 [2024-05-15 10:30:53.867015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read 
completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Write completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 Read completed with error (sct=0, sc=8) 00:37:08.339 starting I/O failed 00:37:08.339 [2024-05-15 10:30:53.867687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:08.339 [2024-05-15 10:30:53.875792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.339 [2024-05-15 10:30:53.876128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.339 [2024-05-15 10:30:53.876182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.339 [2024-05-15 10:30:53.876205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.339 [2024-05-15 10:30:53.876226] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cb4000b90 00:37:08.339 [2024-05-15 10:30:53.876274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:08.339 qpair failed and we were unable to recover it. 
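The three long "completed with error (sct=0, sc=8) ... starting I/O failed" runs above are the outstanding reads and writes on qpair ids 3, 4 and 1 being failed back as their qpairs drop; sct 0 is the generic status type, and sc 8 appears to correspond to the NVMe generic status "Command Aborted due to SQ Deletion", consistent with the submission queues being torn down mid-I/O. A quick split of the aborted operations by direction, again assuming the log is saved as build.log (hypothetical name):

  # Reads vs. writes aborted with (sct=0, sc=8); -o again because
  # multiple log entries share each physical line.
  grep -o 'Read completed with error (sct=0, sc=8)' build.log | wc -l
  grep -o 'Write completed with error (sct=0, sc=8)' build.log | wc -l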
00:37:08.339 [2024-05-15 10:30:53.885764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.339 [2024-05-15 10:30:53.886036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.339 [2024-05-15 10:30:53.886074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.339 [2024-05-15 10:30:53.886091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.339 [2024-05-15 10:30:53.886106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cb4000b90 00:37:08.339 [2024-05-15 10:30:53.886141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:08.339 qpair failed and we were unable to recover it. 00:37:08.339 [2024-05-15 10:30:53.895874] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.339 [2024-05-15 10:30:53.896214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.339 [2024-05-15 10:30:53.896283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.339 [2024-05-15 10:30:53.896325] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.339 [2024-05-15 10:30:53.896345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cc4000b90 00:37:08.339 [2024-05-15 10:30:53.896400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:08.339 qpair failed and we were unable to recover it. 00:37:08.339 [2024-05-15 10:30:53.905849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:08.339 [2024-05-15 10:30:53.906140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:08.339 [2024-05-15 10:30:53.906191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:08.339 [2024-05-15 10:30:53.906209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:08.339 [2024-05-15 10:30:53.906223] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cc4000b90 00:37:08.339 [2024-05-15 10:30:53.906265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:08.339 qpair failed and we were unable to recover it. 
00:37:08.339 [2024-05-15 10:30:53.906818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cfe80 (9): Bad file descriptor 00:37:08.339 Initializing NVMe Controllers 00:37:08.339 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:08.339 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:08.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:08.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:08.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:08.339 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:08.339 Initialization complete. Launching workers. 00:37:08.339 Starting thread on core 1 00:37:08.339 Starting thread on core 2 00:37:08.339 Starting thread on core 3 00:37:08.339 Starting thread on core 0 00:37:08.339 10:30:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:37:08.339 00:37:08.339 real 0m11.338s 00:37:08.339 user 0m20.769s 00:37:08.339 sys 0m3.793s 00:37:08.339 10:30:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:08.339 10:30:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:08.339 ************************************ 00:37:08.339 END TEST nvmf_target_disconnect_tc2 00:37:08.339 ************************************ 00:37:08.339 10:30:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:37:08.339 10:30:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:08.339 10:30:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:37:08.340 10:30:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:08.340 10:30:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:37:08.340 10:30:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:08.340 10:30:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:08.340 10:30:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:08.340 10:30:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:08.340 rmmod nvme_tcp 00:37:08.340 rmmod nvme_fabrics 00:37:08.340 rmmod nvme_keyring 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3092628 ']' 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3092628 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' -z 3092628 ']' 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # kill -0 3092628 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # uname 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:37:08.340 10:30:54 
nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3092628 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_4 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_4 = sudo ']' 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3092628' 00:37:08.340 killing process with pid 3092628 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # kill 3092628 00:37:08.340 [2024-05-15 10:30:54.088592] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:08.340 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # wait 3092628 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:08.600 10:30:54 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.513 10:30:56 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:10.513 00:37:10.513 real 0m20.974s 00:37:10.513 user 0m48.542s 00:37:10.513 sys 0m9.289s 00:37:10.513 10:30:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:10.513 10:30:56 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:10.513 ************************************ 00:37:10.513 END TEST nvmf_target_disconnect 00:37:10.513 ************************************ 00:37:10.775 10:30:56 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:37:10.775 10:30:56 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:10.775 10:30:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.775 10:30:56 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:37:10.775 00:37:10.775 real 30m34.434s 00:37:10.775 user 77m40.664s 00:37:10.775 sys 8m16.634s 00:37:10.775 10:30:56 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:10.775 10:30:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:10.775 ************************************ 00:37:10.775 END TEST nvmf_tcp 00:37:10.775 ************************************ 00:37:10.775 10:30:56 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:37:10.775 10:30:56 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:10.775 10:30:56 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:37:10.775 10:30:56 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:10.775 10:30:56 -- common/autotest_common.sh@10 -- # set +x 00:37:10.775 
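The tail of the disconnect test above is the standard harness teardown: sync, unload the kernel NVMe modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), kill the nvmf target process (pid 3092628 in this run), then remove the SPDK net namespace and flush the test interface address. A simplified reconstruction of that sequence follows; it is a sketch of what nvmftestfini does here, not the verbatim nvmf/common.sh code, and $nvmf_tgt_pid stands in for the pid taken from the log:

  # Teardown roughly as performed by nvmftestfini above.
  sync
  for i in {1..20}; do
      # Retried because the module can still be busy right after the test.
      modprobe -v -r nvme-tcp && break
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  kill -SIGTERM "$nvmf_tgt_pid"   # pid 3092628 in this run
  ip -4 addr flush cvl_0_1        # drop the test veth address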
************************************ 00:37:10.775 START TEST spdkcli_nvmf_tcp 00:37:10.775 ************************************ 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:37:10.775 * Looking for test storage... 00:37:10.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.775 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3094458 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3094458 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 3094458 ']' 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:11.037 10:30:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.037 [2024-05-15 10:30:56.646453] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:11.037 [2024-05-15 10:30:56.646514] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094458 ] 00:37:11.037 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.037 [2024-05-15 10:30:56.705479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:11.037 [2024-05-15 10:30:56.737412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.037 [2024-05-15 10:30:56.737555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.610 10:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:11.610 10:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:37:11.610 10:30:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:11.610 10:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:11.610 10:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.872 10:30:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:11.872 10:30:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:11.872 10:30:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:11.872 10:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:11.872 10:30:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:11.872 10:30:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:11.872 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:11.872 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:11.872 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:11.872 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:11.872 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:11.872 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:11.872 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:11.872 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:11.872 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:11.872 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:11.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:11.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:11.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:11.873 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:11.873 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:11.873 ' 00:37:14.423 [2024-05-15 10:30:59.766873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:15.367 [2024-05-15 10:31:00.938296] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:15.367 [2024-05-15 10:31:00.938731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:17.919 [2024-05-15 10:31:03.084887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:19.308 [2024-05-15 10:31:04.930394] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:20.696 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:20.696 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:20.696 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:20.696 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:20.696 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:20.696 
Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:20.696 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:20.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:20.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:20.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:20.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:20.696 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:20.696 10:31:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:20.696 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:20.696 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.958 10:31:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:37:20.958 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:20.958 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:20.958 10:31:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:20.958 10:31:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:21.219 10:31:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:21.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:21.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:21.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:21.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:21.219 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:21.219 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:21.219 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:21.219 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:21.219 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:21.219 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:21.219 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:21.219 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:21.219 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:21.219 ' 00:37:26.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:26.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:26.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:26.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:26.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:26.522 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:26.522 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:26.522 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:26.522 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:26.522 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:26.522 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:26.522 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:26.522 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:26.522 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3094458 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 3094458 ']' 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 3094458 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3094458 00:37:26.784 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3094458' 00:37:26.785 killing process with pid 3094458 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 3094458 00:37:26.785 [2024-05-15 10:31:12.451962] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 3094458 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3094458 ']' 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3094458 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 3094458 ']' 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 3094458 00:37:26.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (3094458) - No such process 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 3094458 is not found' 00:37:26.785 Process with pid 3094458 is not found 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:26.785 00:37:26.785 real 0m16.121s 00:37:26.785 user 0m33.960s 00:37:26.785 sys 0m0.774s 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:26.785 10:31:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.785 ************************************ 00:37:26.785 END TEST spdkcli_nvmf_tcp 00:37:26.785 ************************************ 00:37:27.048 10:31:12 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:27.048 10:31:12 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:37:27.048 10:31:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:27.048 10:31:12 -- common/autotest_common.sh@10 -- # set +x 00:37:27.048 ************************************ 00:37:27.048 START TEST nvmf_identify_passthru 00:37:27.048 ************************************ 00:37:27.048 10:31:12 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:27.048 * Looking for test storage... 00:37:27.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:27.048 10:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:27.048 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:27.049 10:31:12 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:27.049 10:31:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:27.049 10:31:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:27.049 10:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:27.049 10:31:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:27.049 10:31:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:27.049 10:31:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:27.049 10:31:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.049 10:31:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:27.049 10:31:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:27.049 10:31:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:27.049 10:31:12 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:37:27.049 10:31:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:33.707 10:31:19 
nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:33.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:33.707 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:33.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:33.708 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:33.708 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:33.708 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:33.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:33.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:37:33.970 00:37:33.970 --- 10.0.0.2 ping statistics --- 00:37:33.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.970 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:33.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:33.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:37:33.970 00:37:33.970 --- 10.0.0.1 ping statistics --- 00:37:33.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:33.970 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:33.970 10:31:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:33.970 10:31:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:33.970 10:31:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:33.970 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:37:34.233 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 1 == 0 )) 00:37:34.233 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:65:00.0 00:37:34.233 10:31:19 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:65:00.0 00:37:34.233 10:31:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:34.233 10:31:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:34.233 10:31:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:34.233 10:31:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:34.233 10:31:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:34.233 EAL: No free 2048 kB hugepages reported on node 1 00:37:34.494 
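The nvmf_tcp_init sequence traced above (namespace creation, address assignment, iptables rule, bidirectional ping check) condenses to the following; a sketch using this run's interface names, which common.sh derives from the detected E810 ports:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side: 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side: 10.0.0.2
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1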
10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:37:34.494 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:34.494 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:34.494 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:34.756 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.018 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:35.018 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:35.018 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:35.018 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3101212 00:37:35.018 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:35.018 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:35.018 10:31:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3101212 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 3101212 ']' 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:35.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:35.018 10:31:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:35.018 [2024-05-15 10:31:20.801005] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:35.018 [2024-05-15 10:31:20.801053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:35.280 EAL: No free 2048 kB hugepages reported on node 1 00:37:35.280 [2024-05-15 10:31:20.864857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:35.280 [2024-05-15 10:31:20.896503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:35.280 [2024-05-15 10:31:20.896541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
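The target launch and the RPC exchange logged below (nvmf_set_config, framework_start_init, nvmf_create_transport, then subsystem wiring) correspond to this sequence; a sketch assuming the in-tree scripts/rpc.py talking to the default /var/tmp/spdk.sock, with the workspace path shortened:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # enable admin Identify passthrough (the JSON below)
scripts/rpc.py framework_start_init                        # finish the deferred subsystem init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# verify over the fabric: serial/model must match the PCIe identify done above
./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'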
00:37:35.280 [2024-05-15 10:31:20.896548] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:35.280 [2024-05-15 10:31:20.896554] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:35.280 [2024-05-15 10:31:20.896560] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:35.280 [2024-05-15 10:31:20.896719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:35.280 [2024-05-15 10:31:20.896840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:35.280 [2024-05-15 10:31:20.897001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:35.280 [2024-05-15 10:31:20.897002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:35.854 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:35.854 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:37:35.854 10:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:35.854 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.854 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:35.854 INFO: Log level set to 20 00:37:35.854 INFO: Requests: 00:37:35.854 { 00:37:35.854 "jsonrpc": "2.0", 00:37:35.854 "method": "nvmf_set_config", 00:37:35.854 "id": 1, 00:37:35.854 "params": { 00:37:35.854 "admin_cmd_passthru": { 00:37:35.854 "identify_ctrlr": true 00:37:35.854 } 00:37:35.854 } 00:37:35.854 } 00:37:35.854 00:37:35.854 INFO: response: 00:37:35.854 { 00:37:35.854 "jsonrpc": "2.0", 00:37:35.854 "id": 1, 00:37:35.854 "result": true 00:37:35.854 } 00:37:35.854 00:37:35.854 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:35.854 10:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:35.854 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:35.854 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:35.854 INFO: Setting log level to 20 00:37:35.854 INFO: Setting log level to 20 00:37:35.854 INFO: Log level set to 20 00:37:35.854 INFO: Log level set to 20 00:37:35.854 INFO: Requests: 00:37:35.854 { 00:37:35.854 "jsonrpc": "2.0", 00:37:35.854 "method": "framework_start_init", 00:37:35.854 "id": 1 00:37:35.854 } 00:37:35.854 00:37:35.854 INFO: Requests: 00:37:35.854 { 00:37:35.854 "jsonrpc": "2.0", 00:37:35.854 "method": "framework_start_init", 00:37:35.854 "id": 1 00:37:35.854 } 00:37:35.854 00:37:36.116 [2024-05-15 10:31:21.649018] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:36.116 INFO: response: 00:37:36.116 { 00:37:36.116 "jsonrpc": "2.0", 00:37:36.116 "id": 1, 00:37:36.116 "result": true 00:37:36.116 } 00:37:36.116 00:37:36.116 INFO: response: 00:37:36.116 { 00:37:36.116 "jsonrpc": "2.0", 00:37:36.116 "id": 1, 00:37:36.116 "result": true 00:37:36.116 } 00:37:36.116 00:37:36.116 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.116 10:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:36.116 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.116 10:31:21 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:36.116 INFO: Setting log level to 40 00:37:36.116 INFO: Setting log level to 40 00:37:36.116 INFO: Setting log level to 40 00:37:36.116 [2024-05-15 10:31:21.662255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:36.116 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.116 10:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:36.116 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:36.116 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.116 10:31:21 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:36.116 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.116 10:31:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.377 Nvme0n1 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.377 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.377 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.377 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.377 [2024-05-15 10:31:22.038279] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:36.377 [2024-05-15 10:31:22.038540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.377 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.377 [ 00:37:36.377 { 00:37:36.377 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:36.377 "subtype": "Discovery", 00:37:36.377 "listen_addresses": [], 00:37:36.377 "allow_any_host": true, 00:37:36.377 "hosts": [] 00:37:36.377 }, 00:37:36.377 { 00:37:36.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:36.377 "subtype": "NVMe", 00:37:36.377 "listen_addresses": [ 00:37:36.377 { 00:37:36.377 "trtype": "TCP", 
00:37:36.377 "adrfam": "IPv4", 00:37:36.377 "traddr": "10.0.0.2", 00:37:36.377 "trsvcid": "4420" 00:37:36.377 } 00:37:36.377 ], 00:37:36.377 "allow_any_host": true, 00:37:36.377 "hosts": [], 00:37:36.377 "serial_number": "SPDK00000000000001", 00:37:36.377 "model_number": "SPDK bdev Controller", 00:37:36.377 "max_namespaces": 1, 00:37:36.377 "min_cntlid": 1, 00:37:36.377 "max_cntlid": 65519, 00:37:36.377 "namespaces": [ 00:37:36.377 { 00:37:36.377 "nsid": 1, 00:37:36.377 "bdev_name": "Nvme0n1", 00:37:36.377 "name": "Nvme0n1", 00:37:36.377 "nguid": "3634473052605487002538450000003C", 00:37:36.377 "uuid": "36344730-5260-5487-0025-38450000003c" 00:37:36.377 } 00:37:36.377 ] 00:37:36.377 } 00:37:36.377 ] 00:37:36.377 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.377 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:36.377 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:36.377 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:36.377 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.639 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:37:36.639 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:36.639 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:36.639 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:36.639 EAL: No free 2048 kB hugepages reported on node 1 00:37:36.901 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:36.901 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:37:36.901 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:36.901 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:36.901 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:36.901 10:31:22 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:36.901 rmmod nvme_tcp 00:37:36.901 rmmod nvme_fabrics 00:37:36.901 rmmod 
nvme_keyring 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3101212 ']' 00:37:36.901 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3101212 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 3101212 ']' 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 3101212 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3101212 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3101212' 00:37:36.901 killing process with pid 3101212 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 3101212 00:37:36.901 [2024-05-15 10:31:22.646166] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:36.901 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 3101212 00:37:37.162 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:37.162 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:37.162 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:37.162 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:37.162 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:37.162 10:31:22 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.162 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:37.162 10:31:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.718 10:31:24 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:39.718 00:37:39.718 real 0m12.310s 00:37:39.718 user 0m10.345s 00:37:39.718 sys 0m5.671s 00:37:39.718 10:31:24 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:37:39.718 10:31:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:39.718 ************************************ 00:37:39.718 END TEST nvmf_identify_passthru 00:37:39.718 ************************************ 00:37:39.718 10:31:24 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:39.718 10:31:24 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:37:39.718 10:31:24 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:39.718 10:31:25 -- common/autotest_common.sh@10 -- # set +x 00:37:39.718 ************************************ 00:37:39.718 START TEST nvmf_dif 
00:37:39.718 ************************************ 00:37:39.718 10:31:25 nvmf_dif -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:39.718 * Looking for test storage... 00:37:39.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:39.718 10:31:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.718 10:31:25 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.718 10:31:25 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.718 10:31:25 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.718 10:31:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.718 10:31:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.718 10:31:25 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.718 10:31:25 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:39.718 10:31:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:39.718 10:31:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:39.718 10:31:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:39.718 10:31:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:39.718 10:31:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:39.718 10:31:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.718 10:31:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:39.718 10:31:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:39.718 10:31:25 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:37:39.718 10:31:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
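gather_supported_nvmf_pci_devs, whose xtrace continues below, buckets NICs by vendor:device ID before selecting interfaces; a condensed sketch of the lookups it performs, with IDs as seen in this log and pci_bus_cache assumed populated earlier by common.sh:
intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel ice / E810 family
e810+=(${pci_bus_cache["$intel:0x159b"]})    # matches the 0000:4b:00.* ports found in this run
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several Mellanox IDs probed
pci_devs+=("${e810[@]}")                     # the e810 list is what TCP runs keep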
00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:46.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:46.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:46.315 10:31:31 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:46.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:46.315 10:31:31 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:46.316 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:46.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:46.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:37:46.316 00:37:46.316 --- 10.0.0.2 ping statistics --- 00:37:46.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.316 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:46.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:46.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:37:46.316 00:37:46.316 --- 10.0.0.1 ping statistics --- 00:37:46.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.316 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:46.316 10:31:31 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:49.626 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:49.626 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:49.626 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:49.888 10:31:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:49.888 10:31:35 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3107081 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3107081 00:37:49.888 10:31:35 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 3107081 ']' 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:49.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:37:49.888 10:31:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:49.888 [2024-05-15 10:31:35.544961] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:37:49.888 [2024-05-15 10:31:35.545013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.888 EAL: No free 2048 kB hugepages reported on node 1 00:37:49.888 [2024-05-15 10:31:35.613218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:49.888 [2024-05-15 10:31:35.645457] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.888 [2024-05-15 10:31:35.645495] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:49.888 [2024-05-15 10:31:35.645503] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:49.888 [2024-05-15 10:31:35.645510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:49.888 [2024-05-15 10:31:35.645515] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
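[Note] The nvmf_tcp_init sequence traced above moves one port of the E810 pair into a private network namespace so that target and initiator can exercise real hardware on a single host. Condensed into the underlying commands (all taken from the xtrace output; $SPDK_DIR stands in for the full Jenkins workspace path):

    # Target side lives in the namespace, initiator side stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
    # The target itself is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The two sub-millisecond pings above (0.491 ms and 0.440 ms) confirm the cross-namespace path is up before nvmf_tgt (pid 3107081 in this run) starts listening.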
00:37:49.888 [2024-05-15 10:31:35.645533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:37:50.833 10:31:36 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:50.833 10:31:36 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:50.833 10:31:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:50.833 10:31:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:50.833 [2024-05-15 10:31:36.345901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:50.833 10:31:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:37:50.833 10:31:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:50.833 ************************************ 00:37:50.833 START TEST fio_dif_1_default 00:37:50.833 ************************************ 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:50.833 bdev_null0 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:50.833 [2024-05-15 10:31:36.438086] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:50.833 [2024-05-15 10:31:36.438265] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:50.833 { 00:37:50.833 "params": { 00:37:50.833 "name": "Nvme$subsystem", 00:37:50.833 "trtype": "$TEST_TRANSPORT", 00:37:50.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:50.833 "adrfam": "ipv4", 00:37:50.833 "trsvcid": "$NVMF_PORT", 00:37:50.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:50.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:50.833 "hdgst": ${hdgst:-false}, 00:37:50.833 "ddgst": ${ddgst:-false} 00:37:50.833 }, 00:37:50.833 "method": "bdev_nvme_attach_controller" 00:37:50.833 } 00:37:50.833 EOF 00:37:50.833 )") 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in 
"${sanitizers[@]}" 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:37:50.833 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:50.834 "params": { 00:37:50.834 "name": "Nvme0", 00:37:50.834 "trtype": "tcp", 00:37:50.834 "traddr": "10.0.0.2", 00:37:50.834 "adrfam": "ipv4", 00:37:50.834 "trsvcid": "4420", 00:37:50.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:50.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:50.834 "hdgst": false, 00:37:50.834 "ddgst": false 00:37:50.834 }, 00:37:50.834 "method": "bdev_nvme_attach_controller" 00:37:50.834 }' 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:50.834 10:31:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:51.095 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:51.095 fio-3.35 00:37:51.095 Starting 1 thread 00:37:51.095 EAL: No free 2048 kB hugepages reported on node 1 00:38:03.428 00:38:03.428 filename0: (groupid=0, jobs=1): err= 0: pid=3107596: Wed May 15 10:31:47 2024 00:38:03.428 read: IOPS=177, BW=711KiB/s (728kB/s)(7120KiB/10020msec) 00:38:03.428 slat (nsec): min=5659, max=80227, avg=6569.43, stdev=2385.21 00:38:03.428 clat (usec): min=1805, max=44537, avg=22497.99, stdev=20350.45 00:38:03.428 lat (usec): min=1812, max=44586, avg=22504.56, stdev=20350.41 00:38:03.428 clat percentiles (usec): 00:38:03.428 | 1.00th=[ 1844], 5.00th=[ 2040], 10.00th=[ 2073], 20.00th=[ 2114], 00:38:03.428 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[41681], 60.00th=[42730], 00:38:03.428 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:38:03.428 | 99.00th=[43254], 99.50th=[43254], 
99.90th=[44303], 99.95th=[44303], 00:38:03.428 | 99.99th=[44303] 00:38:03.428 bw ( KiB/s): min= 704, max= 768, per=99.92%, avg=710.40, stdev=19.70, samples=20 00:38:03.428 iops : min= 176, max= 192, avg=177.60, stdev= 4.92, samples=20 00:38:03.428 lat (msec) : 2=3.37%, 4=46.52%, 50=50.11% 00:38:03.428 cpu : usr=95.24%, sys=4.54%, ctx=17, majf=0, minf=279 00:38:03.428 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:03.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:03.428 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:03.428 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:03.428 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:03.428 00:38:03.428 Run status group 0 (all jobs): 00:38:03.428 READ: bw=711KiB/s (728kB/s), 711KiB/s-711KiB/s (728kB/s-728kB/s), io=7120KiB (7291kB), run=10020-10020msec 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.428 00:38:03.428 real 0m11.076s 00:38:03.428 user 0m23.844s 00:38:03.428 sys 0m0.763s 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 ************************************ 00:38:03.428 END TEST fio_dif_1_default 00:38:03.428 ************************************ 00:38:03.428 10:31:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:03.428 10:31:47 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:38:03.428 10:31:47 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 ************************************ 00:38:03.428 START TEST fio_dif_1_multi_subsystems 00:38:03.428 ************************************ 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@28 -- # local sub 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 bdev_null0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 [2024-05-15 10:31:47.602799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.428 bdev_null1 00:38:03.428 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.429 
10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:03.429 { 00:38:03.429 "params": { 00:38:03.429 "name": "Nvme$subsystem", 00:38:03.429 "trtype": "$TEST_TRANSPORT", 00:38:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:03.429 "adrfam": "ipv4", 00:38:03.429 "trsvcid": "$NVMF_PORT", 00:38:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:03.429 "hdgst": ${hdgst:-false}, 00:38:03.429 "ddgst": ${ddgst:-false} 00:38:03.429 }, 00:38:03.429 "method": "bdev_nvme_attach_controller" 00:38:03.429 } 00:38:03.429 EOF 00:38:03.429 )") 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:03.429 10:31:47 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:03.429 { 00:38:03.429 "params": { 00:38:03.429 "name": "Nvme$subsystem", 00:38:03.429 "trtype": "$TEST_TRANSPORT", 00:38:03.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:03.429 "adrfam": "ipv4", 00:38:03.429 "trsvcid": "$NVMF_PORT", 00:38:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:03.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:03.429 "hdgst": ${hdgst:-false}, 00:38:03.429 "ddgst": ${ddgst:-false} 00:38:03.429 }, 00:38:03.429 "method": "bdev_nvme_attach_controller" 00:38:03.429 } 00:38:03.429 EOF 00:38:03.429 )") 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:03.429 "params": { 00:38:03.429 "name": "Nvme0", 00:38:03.429 "trtype": "tcp", 00:38:03.429 "traddr": "10.0.0.2", 00:38:03.429 "adrfam": "ipv4", 00:38:03.429 "trsvcid": "4420", 00:38:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.429 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:03.429 "hdgst": false, 00:38:03.429 "ddgst": false 00:38:03.429 }, 00:38:03.429 "method": "bdev_nvme_attach_controller" 00:38:03.429 },{ 00:38:03.429 "params": { 00:38:03.429 "name": "Nvme1", 00:38:03.429 "trtype": "tcp", 00:38:03.429 "traddr": "10.0.0.2", 00:38:03.429 "adrfam": "ipv4", 00:38:03.429 "trsvcid": "4420", 00:38:03.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:03.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:03.429 "hdgst": false, 00:38:03.429 "ddgst": false 00:38:03.429 }, 00:38:03.429 "method": "bdev_nvme_attach_controller" 00:38:03.429 }' 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:03.429 10:31:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:03.429 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:03.429 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:03.429 fio-3.35 00:38:03.429 Starting 2 threads 00:38:03.429 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.465 00:38:13.465 filename0: (groupid=0, jobs=1): err= 0: pid=3110049: Wed May 15 10:31:58 2024 00:38:13.465 read: IOPS=178, BW=713KiB/s (730kB/s)(7152KiB/10032msec) 00:38:13.465 slat (nsec): min=5643, max=26919, avg=6652.02, stdev=1390.77 00:38:13.465 clat (usec): min=1765, max=44603, avg=22423.99, stdev=20256.95 00:38:13.465 lat (usec): min=1771, max=44630, avg=22430.64, stdev=20256.95 00:38:13.465 clat percentiles (usec): 00:38:13.465 | 1.00th=[ 1958], 5.00th=[ 2008], 10.00th=[ 2057], 20.00th=[ 2114], 00:38:13.465 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[41681], 60.00th=[42206], 00:38:13.465 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:38:13.465 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:38:13.465 | 99.99th=[44827] 
00:38:13.465 bw ( KiB/s): min= 704, max= 768, per=65.45%, avg=713.60, stdev=18.28, samples=20 00:38:13.465 iops : min= 176, max= 192, avg=178.40, stdev= 4.57, samples=20 00:38:13.465 lat (msec) : 2=3.47%, 4=46.42%, 50=50.11% 00:38:13.465 cpu : usr=97.02%, sys=2.78%, ctx=15, majf=0, minf=109 00:38:13.465 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:13.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.465 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:13.465 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:13.465 filename1: (groupid=0, jobs=1): err= 0: pid=3110050: Wed May 15 10:31:58 2024 00:38:13.465 read: IOPS=94, BW=377KiB/s (386kB/s)(3776KiB/10025msec) 00:38:13.465 slat (nsec): min=5646, max=26015, avg=6635.34, stdev=1485.43 00:38:13.465 clat (usec): min=41839, max=44600, avg=42459.17, stdev=513.28 00:38:13.465 lat (usec): min=41844, max=44626, avg=42465.80, stdev=513.47 00:38:13.465 clat percentiles (usec): 00:38:13.465 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:38:13.465 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:38:13.465 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:38:13.465 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:38:13.465 | 99.99th=[44827] 00:38:13.465 bw ( KiB/s): min= 352, max= 384, per=34.52%, avg=376.00, stdev=14.22, samples=20 00:38:13.465 iops : min= 88, max= 96, avg=94.00, stdev= 3.55, samples=20 00:38:13.465 lat (msec) : 50=100.00% 00:38:13.465 cpu : usr=97.20%, sys=2.60%, ctx=14, majf=0, minf=140 00:38:13.465 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:13.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:13.465 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:13.465 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:13.465 00:38:13.465 Run status group 0 (all jobs): 00:38:13.465 READ: bw=1089KiB/s (1115kB/s), 377KiB/s-713KiB/s (386kB/s-730kB/s), io=10.7MiB (11.2MB), run=10025-10032msec 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:58 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 00:38:13.465 real 0m11.416s 00:38:13.465 user 0m36.416s 00:38:13.465 sys 0m0.874s 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:13.465 10:31:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 ************************************ 00:38:13.465 END TEST fio_dif_1_multi_subsystems 00:38:13.465 ************************************ 00:38:13.465 10:31:59 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:13.465 10:31:59 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:38:13.465 10:31:59 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:13.465 10:31:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 ************************************ 00:38:13.465 START TEST fio_dif_rand_params 00:38:13.465 ************************************ 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 bdev_null0 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:13.465 [2024-05-15 10:31:59.110753] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:13.465 { 00:38:13.465 "params": { 00:38:13.465 "name": "Nvme$subsystem", 00:38:13.465 "trtype": "$TEST_TRANSPORT", 00:38:13.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:13.465 "adrfam": "ipv4", 00:38:13.465 "trsvcid": "$NVMF_PORT", 00:38:13.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:13.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:13.465 "hdgst": ${hdgst:-false}, 00:38:13.465 "ddgst": ${ddgst:-false} 00:38:13.465 }, 00:38:13.465 "method": 
"bdev_nvme_attach_controller" 00:38:13.465 } 00:38:13.465 EOF 00:38:13.465 )") 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:13.465 "params": { 00:38:13.465 "name": "Nvme0", 00:38:13.465 "trtype": "tcp", 00:38:13.465 "traddr": "10.0.0.2", 00:38:13.465 "adrfam": "ipv4", 00:38:13.465 "trsvcid": "4420", 00:38:13.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:13.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:13.465 "hdgst": false, 00:38:13.465 "ddgst": false 00:38:13.465 }, 00:38:13.465 "method": "bdev_nvme_attach_controller" 00:38:13.465 }' 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:13.465 10:31:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:14.035 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:14.035 ... 
00:38:14.035 fio-3.35 00:38:14.035 Starting 3 threads 00:38:14.035 EAL: No free 2048 kB hugepages reported on node 1 00:38:19.342 00:38:19.342 filename0: (groupid=0, jobs=1): err= 0: pid=3112313: Wed May 15 10:32:05 2024 00:38:19.342 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(52.9MiB/5047msec) 00:38:19.342 slat (nsec): min=8268, max=31526, avg=8917.50, stdev=1279.80 00:38:19.342 clat (usec): min=7089, max=62219, avg=35772.00, stdev=20930.96 00:38:19.342 lat (usec): min=7097, max=62251, avg=35780.92, stdev=20931.12 00:38:19.342 clat percentiles (usec): 00:38:19.342 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9896], 00:38:19.342 | 30.00th=[11207], 40.00th=[49021], 50.00th=[51119], 60.00th=[52167], 00:38:19.342 | 70.00th=[52691], 80.00th=[53216], 90.00th=[54264], 95.00th=[54264], 00:38:19.342 | 99.00th=[55313], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:38:19.342 | 99.99th=[62129] 00:38:19.342 bw ( KiB/s): min= 8448, max=14592, per=30.20%, avg=10752.00, stdev=1773.62, samples=10 00:38:19.342 iops : min= 66, max= 114, avg=84.00, stdev=13.86, samples=10 00:38:19.342 lat (msec) : 10=21.51%, 20=18.20%, 50=0.71%, 100=59.57% 00:38:19.342 cpu : usr=96.89%, sys=2.75%, ctx=15, majf=0, minf=45 00:38:19.342 IO depths : 1=16.5%, 2=83.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:19.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.342 issued rwts: total=423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:19.342 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:19.342 filename0: (groupid=0, jobs=1): err= 0: pid=3112314: Wed May 15 10:32:05 2024 00:38:19.342 read: IOPS=96, BW=12.0MiB/s (12.6MB/s)(60.4MiB/5031msec) 00:38:19.342 slat (nsec): min=5693, max=35928, avg=9381.75, stdev=2359.35 00:38:19.342 clat (usec): min=6998, max=69455, avg=31230.51, stdev=21091.79 00:38:19.342 lat (usec): min=7006, max=69488, avg=31239.89, stdev=21091.98 00:38:19.342 clat percentiles (usec): 00:38:19.342 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[11469], 20.00th=[12649], 00:38:19.342 | 30.00th=[14353], 40.00th=[15664], 50.00th=[16909], 60.00th=[20055], 00:38:19.342 | 70.00th=[54789], 80.00th=[56886], 90.00th=[58459], 95.00th=[59507], 00:38:19.342 | 99.00th=[62653], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731], 00:38:19.342 | 99.99th=[69731] 00:38:19.342 bw ( KiB/s): min= 9216, max=14592, per=34.51%, avg=12288.00, stdev=1810.19, samples=10 00:38:19.342 iops : min= 72, max= 114, avg=96.00, stdev=14.14, samples=10 00:38:19.342 lat (msec) : 10=3.73%, 20=55.90%, 50=0.62%, 100=39.75% 00:38:19.342 cpu : usr=96.14%, sys=3.14%, ctx=36, majf=0, minf=117 00:38:19.342 IO depths : 1=10.4%, 2=89.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:19.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.342 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:19.342 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:19.342 filename0: (groupid=0, jobs=1): err= 0: pid=3112315: Wed May 15 10:32:05 2024 00:38:19.342 read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(62.2MiB/5009msec) 00:38:19.342 slat (nsec): min=5674, max=31639, avg=8147.56, stdev=1909.81 00:38:19.342 clat (usec): min=7078, max=65759, avg=30160.01, stdev=21307.19 00:38:19.342 lat (usec): min=7087, max=65790, avg=30168.16, stdev=21307.28 00:38:19.342 clat percentiles (usec): 
00:38:19.342 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[12256], 00:38:19.342 | 30.00th=[13698], 40.00th=[14746], 50.00th=[16057], 60.00th=[18744], 00:38:19.342 | 70.00th=[54789], 80.00th=[56886], 90.00th=[58983], 95.00th=[60031], 00:38:19.342 | 99.00th=[61604], 99.50th=[65274], 99.90th=[65799], 99.95th=[65799], 00:38:19.342 | 99.99th=[65799] 00:38:19.342 bw ( KiB/s): min=10752, max=18432, per=35.59%, avg=12672.00, stdev=2209.62, samples=10 00:38:19.342 iops : min= 84, max= 144, avg=99.00, stdev=17.26, samples=10 00:38:19.342 lat (msec) : 10=7.43%, 20=53.82%, 50=1.00%, 100=37.75% 00:38:19.342 cpu : usr=96.77%, sys=2.84%, ctx=10, majf=0, minf=105 00:38:19.342 IO depths : 1=7.6%, 2=92.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:19.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:19.342 issued rwts: total=498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:19.342 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:19.342 00:38:19.342 Run status group 0 (all jobs): 00:38:19.342 READ: bw=34.8MiB/s (36.5MB/s), 10.5MiB/s-12.4MiB/s (11.0MB/s-13.0MB/s), io=176MiB (184MB), run=5009-5047msec 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.604 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:19.605 
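Note: the create_subsystem calls traced below boil down to four SPDK RPCs per subsystem. As a standalone sketch (assuming a stock SPDK checkout with scripts/rpc.py and a running nvmf target; all values copied from this run), subsystem 0 would be set up with:

    # Null bdev: 64 (MB) with 512-byte blocks, 16 bytes of metadata, DIF type 2
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    # NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420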
10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 bdev_null0 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 [2024-05-15 10:32:05.224086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 bdev_null1 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
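Note: the tcp.c *NOTICE* above confirms the target is listening. This test drives I/O through the fio spdk_bdev plugin rather than the kernel initiator, but the same listener could be sanity-checked from an ordinary Linux host with nvme-cli (an aside, not part of the test flow):

    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0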
00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 bdev_null2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.605 { 00:38:19.605 "params": { 00:38:19.605 "name": "Nvme$subsystem", 00:38:19.605 "trtype": "$TEST_TRANSPORT", 00:38:19.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.605 "adrfam": "ipv4", 00:38:19.605 "trsvcid": "$NVMF_PORT", 00:38:19.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.605 "hdgst": ${hdgst:-false}, 00:38:19.605 "ddgst": ${ddgst:-false} 00:38:19.605 }, 00:38:19.605 "method": "bdev_nvme_attach_controller" 00:38:19.605 } 00:38:19.605 EOF 00:38:19.605 )") 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.605 { 00:38:19.605 "params": { 00:38:19.605 "name": "Nvme$subsystem", 00:38:19.605 "trtype": "$TEST_TRANSPORT", 00:38:19.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.605 "adrfam": "ipv4", 00:38:19.605 "trsvcid": "$NVMF_PORT", 00:38:19.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.605 "hdgst": ${hdgst:-false}, 00:38:19.605 "ddgst": ${ddgst:-false} 00:38:19.605 }, 00:38:19.605 "method": "bdev_nvme_attach_controller" 00:38:19.605 } 00:38:19.605 EOF 00:38:19.605 )") 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:19.605 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:19.605 { 00:38:19.605 "params": { 00:38:19.605 "name": "Nvme$subsystem", 00:38:19.605 "trtype": "$TEST_TRANSPORT", 00:38:19.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:19.605 "adrfam": "ipv4", 00:38:19.605 "trsvcid": "$NVMF_PORT", 00:38:19.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:19.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:19.605 "hdgst": ${hdgst:-false}, 00:38:19.606 "ddgst": ${ddgst:-false} 00:38:19.606 }, 00:38:19.606 "method": "bdev_nvme_attach_controller" 00:38:19.606 } 00:38:19.606 EOF 00:38:19.606 )") 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:19.606 "params": { 00:38:19.606 "name": "Nvme0", 00:38:19.606 "trtype": "tcp", 00:38:19.606 "traddr": "10.0.0.2", 00:38:19.606 "adrfam": "ipv4", 00:38:19.606 "trsvcid": "4420", 00:38:19.606 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:19.606 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:19.606 "hdgst": false, 00:38:19.606 "ddgst": false 00:38:19.606 }, 00:38:19.606 "method": "bdev_nvme_attach_controller" 00:38:19.606 },{ 00:38:19.606 "params": { 00:38:19.606 "name": "Nvme1", 00:38:19.606 "trtype": "tcp", 00:38:19.606 "traddr": "10.0.0.2", 00:38:19.606 "adrfam": "ipv4", 00:38:19.606 "trsvcid": "4420", 00:38:19.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:19.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:19.606 "hdgst": false, 00:38:19.606 "ddgst": false 00:38:19.606 }, 00:38:19.606 "method": "bdev_nvme_attach_controller" 00:38:19.606 },{ 00:38:19.606 "params": { 00:38:19.606 "name": "Nvme2", 00:38:19.606 "trtype": "tcp", 00:38:19.606 "traddr": "10.0.0.2", 00:38:19.606 "adrfam": "ipv4", 00:38:19.606 "trsvcid": "4420", 00:38:19.606 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:19.606 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:19.606 "hdgst": false, 00:38:19.606 "ddgst": false 00:38:19.606 }, 00:38:19.606 "method": "bdev_nvme_attach_controller" 00:38:19.606 }' 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:19.606 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:19.906 10:32:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1342 -- # asan_lib= 00:38:19.906 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:19.906 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:19.906 10:32:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:20.171 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:20.171 ... 00:38:20.171 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:20.171 ... 00:38:20.171 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:20.171 ... 00:38:20.171 fio-3.35 00:38:20.171 Starting 24 threads 00:38:20.171 EAL: No free 2048 kB hugepages reported on node 1 00:38:32.417 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113660: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=504, BW=2018KiB/s (2066kB/s)(19.8MiB/10027msec) 00:38:32.417 slat (nsec): min=2909, max=78906, avg=11753.99, stdev=8526.28 00:38:32.417 clat (usec): min=5077, max=54642, avg=31642.70, stdev=4848.04 00:38:32.417 lat (usec): min=5083, max=54679, avg=31654.46, stdev=4848.84 00:38:32.417 clat percentiles (usec): 00:38:32.417 | 1.00th=[ 7177], 5.00th=[24249], 10.00th=[30016], 20.00th=[30802], 00:38:32.417 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:38:32.417 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[37487], 00:38:32.417 | 99.00th=[46400], 99.50th=[49546], 99.90th=[54264], 99.95th=[54789], 00:38:32.417 | 99.99th=[54789] 00:38:32.417 bw ( KiB/s): min= 1824, max= 2480, per=4.34%, avg=2015.40, stdev=126.10, samples=20 00:38:32.417 iops : min= 456, max= 620, avg=503.70, stdev=31.48, samples=20 00:38:32.417 lat (msec) : 10=1.38%, 20=1.60%, 50=96.62%, 100=0.40% 00:38:32.417 cpu : usr=98.42%, sys=0.86%, ctx=21, majf=0, minf=87 00:38:32.417 IO depths : 1=1.4%, 2=2.9%, 4=11.3%, 8=73.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=90.4%, 8=4.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=5058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113661: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=450, BW=1802KiB/s (1845kB/s)(17.7MiB/10055msec) 00:38:32.417 slat (nsec): min=5829, max=74715, avg=14834.96, stdev=11801.82 00:38:32.417 clat (usec): min=16825, max=81217, avg=35418.56, stdev=7399.10 00:38:32.417 lat (usec): min=16846, max=81226, avg=35433.40, stdev=7398.16 00:38:32.417 clat percentiles (usec): 00:38:32.417 | 1.00th=[21103], 5.00th=[25822], 10.00th=[29754], 20.00th=[31065], 00:38:32.417 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32637], 60.00th=[33424], 00:38:32.417 | 70.00th=[39584], 80.00th=[41681], 90.00th=[44303], 95.00th=[47449], 00:38:32.417 | 99.00th=[62653], 99.50th=[65799], 99.90th=[81265], 99.95th=[81265], 00:38:32.417 | 99.99th=[81265] 00:38:32.417 bw ( KiB/s): min= 1643, max= 2016, per=3.89%, avg=1804.75, stdev=83.56, samples=20 00:38:32.417 iops : min= 410, max= 504, avg=451.15, stdev=20.97, samples=20 00:38:32.417 lat (msec) : 20=0.33%, 
50=96.09%, 100=3.58% 00:38:32.417 cpu : usr=99.04%, sys=0.67%, ctx=60, majf=0, minf=58 00:38:32.417 IO depths : 1=0.3%, 2=0.7%, 4=8.3%, 8=76.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=90.3%, 8=6.0%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=4529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113663: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.9MiB/10082msec) 00:38:32.417 slat (usec): min=5, max=395, avg=13.03, stdev=16.56 00:38:32.417 clat (usec): min=16267, max=95673, avg=33175.22, stdev=6230.34 00:38:32.417 lat (usec): min=16274, max=95681, avg=33188.25, stdev=6230.93 00:38:32.417 clat percentiles (usec): 00:38:32.417 | 1.00th=[19792], 5.00th=[23462], 10.00th=[28443], 20.00th=[30540], 00:38:32.417 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:38:32.417 | 70.00th=[32900], 80.00th=[34866], 90.00th=[42206], 95.00th=[44827], 00:38:32.417 | 99.00th=[51119], 99.50th=[54264], 99.90th=[95945], 99.95th=[95945], 00:38:32.417 | 99.99th=[95945] 00:38:32.417 bw ( KiB/s): min= 1744, max= 2048, per=4.15%, avg=1925.80, stdev=78.66, samples=20 00:38:32.417 iops : min= 436, max= 512, avg=481.45, stdev=19.66, samples=20 00:38:32.417 lat (msec) : 20=1.39%, 50=97.27%, 100=1.35% 00:38:32.417 cpu : usr=98.12%, sys=1.07%, ctx=29, majf=0, minf=36 00:38:32.417 IO depths : 1=1.7%, 2=3.3%, 4=11.6%, 8=70.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=90.9%, 8=4.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=4831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113664: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=467, BW=1872KiB/s (1917kB/s)(18.4MiB/10057msec) 00:38:32.417 slat (nsec): min=5835, max=95792, avg=16752.29, stdev=12823.46 00:38:32.417 clat (msec): min=14, max=100, avg=34.00, stdev= 7.03 00:38:32.417 lat (msec): min=14, max=100, avg=34.01, stdev= 7.03 00:38:32.417 clat percentiles (msec): 00:38:32.417 | 1.00th=[ 20], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 31], 00:38:32.417 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:38:32.417 | 70.00th=[ 34], 80.00th=[ 40], 90.00th=[ 44], 95.00th=[ 47], 00:38:32.417 | 99.00th=[ 55], 99.50th=[ 60], 99.90th=[ 89], 99.95th=[ 102], 00:38:32.417 | 99.99th=[ 102] 00:38:32.417 bw ( KiB/s): min= 1650, max= 1976, per=4.04%, avg=1874.65, stdev=79.71, samples=20 00:38:32.417 iops : min= 412, max= 494, avg=468.60, stdev=19.97, samples=20 00:38:32.417 lat (msec) : 20=1.21%, 50=96.37%, 100=2.36%, 250=0.06% 00:38:32.417 cpu : usr=98.96%, sys=0.72%, ctx=14, majf=0, minf=52 00:38:32.417 IO depths : 1=0.2%, 2=0.5%, 4=6.1%, 8=78.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=89.9%, 8=7.4%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=4706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113665: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=497, BW=1989KiB/s 
(2037kB/s)(19.4MiB/10008msec) 00:38:32.417 slat (nsec): min=6023, max=89049, avg=19555.37, stdev=13097.65 00:38:32.417 clat (usec): min=28793, max=71449, avg=31995.51, stdev=2727.80 00:38:32.417 lat (usec): min=28800, max=71456, avg=32015.07, stdev=2727.10 00:38:32.417 clat percentiles (usec): 00:38:32.417 | 1.00th=[29492], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:38:32.417 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:38:32.417 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:38:32.417 | 99.00th=[34866], 99.50th=[52167], 99.90th=[71828], 99.95th=[71828], 00:38:32.417 | 99.99th=[71828] 00:38:32.417 bw ( KiB/s): min= 1916, max= 2048, per=4.29%, avg=1992.89, stdev=64.78, samples=19 00:38:32.417 iops : min= 479, max= 512, avg=498.11, stdev=16.10, samples=19 00:38:32.417 lat (msec) : 50=99.36%, 100=0.64% 00:38:32.417 cpu : usr=99.18%, sys=0.53%, ctx=33, majf=0, minf=57 00:38:32.417 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113666: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.9MiB/10087msec) 00:38:32.417 slat (nsec): min=5833, max=90537, avg=13535.94, stdev=11095.28 00:38:32.417 clat (usec): min=15322, max=97292, avg=33304.83, stdev=7548.64 00:38:32.417 lat (usec): min=15332, max=97317, avg=33318.37, stdev=7550.10 00:38:32.417 clat percentiles (usec): 00:38:32.417 | 1.00th=[18744], 5.00th=[21365], 10.00th=[23462], 20.00th=[30540], 00:38:32.417 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32113], 60.00th=[32637], 00:38:32.417 | 70.00th=[33424], 80.00th=[39584], 90.00th=[42730], 95.00th=[44827], 00:38:32.417 | 99.00th=[52691], 99.50th=[53740], 99.90th=[95945], 99.95th=[95945], 00:38:32.417 | 99.99th=[96994] 00:38:32.417 bw ( KiB/s): min= 1792, max= 2096, per=4.15%, avg=1926.60, stdev=74.90, samples=20 00:38:32.417 iops : min= 448, max= 524, avg=481.65, stdev=18.73, samples=20 00:38:32.417 lat (msec) : 20=2.88%, 50=95.01%, 100=2.11% 00:38:32.417 cpu : usr=99.29%, sys=0.44%, ctx=14, majf=0, minf=40 00:38:32.417 IO depths : 1=1.2%, 2=3.3%, 4=13.7%, 8=69.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=4833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113667: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=454, BW=1817KiB/s (1861kB/s)(17.9MiB/10087msec) 00:38:32.417 slat (usec): min=5, max=100, avg=16.20, stdev=12.66 00:38:32.417 clat (usec): min=17108, max=97128, avg=35064.02, stdev=7280.16 00:38:32.417 lat (usec): min=17128, max=97157, avg=35080.23, stdev=7280.22 00:38:32.417 clat percentiles (usec): 00:38:32.417 | 1.00th=[20841], 5.00th=[24511], 10.00th=[29492], 20.00th=[30802], 00:38:32.417 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32637], 60.00th=[33424], 00:38:32.417 | 70.00th=[38011], 80.00th=[41157], 90.00th=[44303], 95.00th=[46924], 00:38:32.417 | 99.00th=[52691], 99.50th=[56361], 
99.90th=[96994], 99.95th=[96994], 00:38:32.417 | 99.99th=[96994] 00:38:32.417 bw ( KiB/s): min= 1584, max= 1952, per=3.94%, avg=1826.60, stdev=101.16, samples=20 00:38:32.417 iops : min= 396, max= 488, avg=456.65, stdev=25.29, samples=20 00:38:32.417 lat (msec) : 20=0.74%, 50=97.08%, 100=2.18% 00:38:32.417 cpu : usr=98.83%, sys=0.78%, ctx=95, majf=0, minf=39 00:38:32.417 IO depths : 1=1.0%, 2=2.0%, 4=9.1%, 8=74.2%, 16=13.7%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=90.4%, 8=6.2%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=4583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename0: (groupid=0, jobs=1): err= 0: pid=3113668: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=632, BW=2530KiB/s (2591kB/s)(24.7MiB/10006msec) 00:38:32.417 slat (usec): min=5, max=313, avg= 9.86, stdev=10.24 00:38:32.417 clat (usec): min=5179, max=57971, avg=25229.68, stdev=6123.99 00:38:32.417 lat (usec): min=5192, max=57979, avg=25239.54, stdev=6125.36 00:38:32.417 clat percentiles (usec): 00:38:32.417 | 1.00th=[12649], 5.00th=[18482], 10.00th=[19268], 20.00th=[20579], 00:38:32.417 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22938], 60.00th=[24773], 00:38:32.417 | 70.00th=[30278], 80.00th=[31327], 90.00th=[32375], 95.00th=[33817], 00:38:32.417 | 99.00th=[43254], 99.50th=[50594], 99.90th=[55837], 99.95th=[57934], 00:38:32.417 | 99.99th=[57934] 00:38:32.417 bw ( KiB/s): min= 2104, max= 3024, per=5.49%, avg=2547.32, stdev=257.91, samples=19 00:38:32.417 iops : min= 526, max= 756, avg=636.74, stdev=64.50, samples=19 00:38:32.417 lat (msec) : 10=0.62%, 20=13.71%, 50=85.17%, 100=0.51% 00:38:32.417 cpu : usr=92.83%, sys=3.26%, ctx=56, majf=0, minf=64 00:38:32.417 IO depths : 1=1.5%, 2=3.4%, 4=11.1%, 8=72.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 issued rwts: total=6330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename1: (groupid=0, jobs=1): err= 0: pid=3113669: Wed May 15 10:32:16 2024 00:38:32.417 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.5MiB/10059msec) 00:38:32.417 slat (nsec): min=6086, max=97141, avg=15434.91, stdev=12051.53 00:38:32.417 clat (msec): min=18, max=100, avg=32.11, stdev= 4.35 00:38:32.417 lat (msec): min=18, max=100, avg=32.12, stdev= 4.35 00:38:32.417 clat percentiles (msec): 00:38:32.417 | 1.00th=[ 29], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:38:32.417 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:38:32.417 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 34], 95.00th=[ 35], 00:38:32.417 | 99.00th=[ 37], 99.50th=[ 53], 99.90th=[ 101], 99.95th=[ 101], 00:38:32.417 | 99.99th=[ 101] 00:38:32.417 bw ( KiB/s): min= 1777, max= 2059, per=4.29%, avg=1988.50, stdev=80.76, samples=20 00:38:32.417 iops : min= 444, max= 514, avg=497.00, stdev=20.14, samples=20 00:38:32.417 lat (msec) : 20=0.28%, 50=99.08%, 100=0.32%, 250=0.32% 00:38:32.417 cpu : usr=98.91%, sys=0.69%, ctx=93, majf=0, minf=37 00:38:32.417 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:32.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.417 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:32.417 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.417 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.417 filename1: (groupid=0, jobs=1): err= 0: pid=3113670: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=492, BW=1971KiB/s (2019kB/s)(19.4MiB/10052msec) 00:38:32.418 slat (nsec): min=5877, max=99098, avg=17842.10, stdev=11641.56 00:38:32.418 clat (msec): min=19, max=103, avg=32.30, stdev= 4.76 00:38:32.418 lat (msec): min=19, max=103, avg=32.32, stdev= 4.76 00:38:32.418 clat percentiles (msec): 00:38:32.418 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:38:32.418 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:38:32.418 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 34], 95.00th=[ 35], 00:38:32.418 | 99.00th=[ 44], 99.50th=[ 61], 99.90th=[ 102], 99.95th=[ 102], 00:38:32.418 | 99.99th=[ 104] 00:38:32.418 bw ( KiB/s): min= 1792, max= 2048, per=4.26%, avg=1974.20, stdev=84.66, samples=20 00:38:32.418 iops : min= 448, max= 512, avg=493.40, stdev=21.15, samples=20 00:38:32.418 lat (msec) : 20=0.22%, 50=98.97%, 100=0.48%, 250=0.32% 00:38:32.418 cpu : usr=98.11%, sys=0.98%, ctx=78, majf=0, minf=49 00:38:32.418 IO depths : 1=5.7%, 2=11.7%, 4=24.4%, 8=51.4%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename1: (groupid=0, jobs=1): err= 0: pid=3113672: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=457, BW=1830KiB/s (1874kB/s)(18.0MiB/10086msec) 00:38:32.418 slat (nsec): min=5822, max=99042, avg=16172.63, stdev=12744.19 00:38:32.418 clat (msec): min=17, max=103, avg=34.86, stdev= 7.54 00:38:32.418 lat (msec): min=17, max=103, avg=34.88, stdev= 7.54 00:38:32.418 clat percentiles (msec): 00:38:32.418 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 30], 20.00th=[ 31], 00:38:32.418 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:38:32.418 | 70.00th=[ 36], 80.00th=[ 42], 90.00th=[ 44], 95.00th=[ 47], 00:38:32.418 | 99.00th=[ 53], 99.50th=[ 56], 99.90th=[ 104], 99.95th=[ 104], 00:38:32.418 | 99.99th=[ 104] 00:38:32.418 bw ( KiB/s): min= 1664, max= 1944, per=3.96%, avg=1839.40, stdev=65.92, samples=20 00:38:32.418 iops : min= 416, max= 486, avg=459.85, stdev=16.48, samples=20 00:38:32.418 lat (msec) : 20=0.74%, 50=96.55%, 100=2.58%, 250=0.13% 00:38:32.418 cpu : usr=98.95%, sys=0.74%, ctx=13, majf=0, minf=48 00:38:32.418 IO depths : 1=0.7%, 2=1.3%, 4=8.9%, 8=75.2%, 16=13.9%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=90.4%, 8=6.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename1: (groupid=0, jobs=1): err= 0: pid=3113673: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=457, BW=1829KiB/s (1873kB/s)(18.0MiB/10082msec) 00:38:32.418 slat (nsec): min=6890, max=95829, avg=18122.46, stdev=13311.87 00:38:32.418 clat (msec): min=16, max=100, avg=34.86, stdev= 7.45 00:38:32.418 lat (msec): min=16, max=100, avg=34.88, stdev= 7.45 00:38:32.418 clat percentiles (msec): 00:38:32.418 | 1.00th=[ 21], 5.00th=[ 25], 10.00th=[ 30], 20.00th=[ 31], 00:38:32.418 | 30.00th=[ 32], 
40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 34], 00:38:32.418 | 70.00th=[ 36], 80.00th=[ 42], 90.00th=[ 45], 95.00th=[ 48], 00:38:32.418 | 99.00th=[ 54], 99.50th=[ 56], 99.90th=[ 101], 99.95th=[ 101], 00:38:32.418 | 99.99th=[ 101] 00:38:32.418 bw ( KiB/s): min= 1616, max= 2000, per=3.96%, avg=1837.40, stdev=94.12, samples=20 00:38:32.418 iops : min= 404, max= 500, avg=459.35, stdev=23.53, samples=20 00:38:32.418 lat (msec) : 20=0.63%, 50=96.20%, 100=3.04%, 250=0.13% 00:38:32.418 cpu : usr=98.81%, sys=0.81%, ctx=33, majf=0, minf=70 00:38:32.418 IO depths : 1=0.4%, 2=0.9%, 4=8.3%, 8=76.2%, 16=14.1%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=90.2%, 8=6.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename1: (groupid=0, jobs=1): err= 0: pid=3113674: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.4MiB/10011msec) 00:38:32.418 slat (nsec): min=5913, max=85816, avg=17667.38, stdev=11900.27 00:38:32.418 clat (usec): min=22699, max=71760, avg=32036.39, stdev=2778.86 00:38:32.418 lat (usec): min=22708, max=71768, avg=32054.06, stdev=2778.13 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[29230], 5.00th=[30278], 10.00th=[30540], 20.00th=[31065], 00:38:32.418 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:38:32.418 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:38:32.418 | 99.00th=[40109], 99.50th=[46400], 99.90th=[71828], 99.95th=[71828], 00:38:32.418 | 99.99th=[71828] 00:38:32.418 bw ( KiB/s): min= 1788, max= 2048, per=4.28%, avg=1983.25, stdev=77.85, samples=20 00:38:32.418 iops : min= 447, max= 512, avg=495.70, stdev=19.43, samples=20 00:38:32.418 lat (msec) : 50=99.62%, 100=0.38% 00:38:32.418 cpu : usr=99.05%, sys=0.66%, ctx=24, majf=0, minf=45 00:38:32.418 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename1: (groupid=0, jobs=1): err= 0: pid=3113675: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=529, BW=2117KiB/s (2168kB/s)(20.8MiB/10075msec) 00:38:32.418 slat (nsec): min=5825, max=71529, avg=11354.11, stdev=7692.72 00:38:32.418 clat (usec): min=2873, max=76384, avg=30069.05, stdev=5887.88 00:38:32.418 lat (usec): min=2899, max=76392, avg=30080.41, stdev=5888.05 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[ 5604], 5.00th=[20055], 10.00th=[21890], 20.00th=[30016], 00:38:32.418 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31589], 60.00th=[31851], 00:38:32.418 | 70.00th=[32113], 80.00th=[32637], 90.00th=[33424], 95.00th=[33817], 00:38:32.418 | 99.00th=[42730], 99.50th=[51119], 99.90th=[76022], 99.95th=[76022], 00:38:32.418 | 99.99th=[76022] 00:38:32.418 bw ( KiB/s): min= 1916, max= 2816, per=4.59%, avg=2127.35, stdev=191.29, samples=20 00:38:32.418 iops : min= 479, max= 704, avg=531.65, stdev=47.87, samples=20 00:38:32.418 lat (msec) : 4=0.30%, 10=1.71%, 20=2.63%, 50=94.84%, 100=0.53% 00:38:32.418 cpu : usr=98.94%, sys=0.72%, ctx=12, majf=0, minf=58 00:38:32.418 IO depths : 1=4.9%, 
2=10.1%, 4=21.3%, 8=55.9%, 16=7.9%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=5332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename1: (groupid=0, jobs=1): err= 0: pid=3113676: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=538, BW=2156KiB/s (2207kB/s)(21.1MiB/10020msec) 00:38:32.418 slat (nsec): min=5825, max=81846, avg=10106.70, stdev=6183.60 00:38:32.418 clat (usec): min=5507, max=54813, avg=29600.27, stdev=4893.78 00:38:32.418 lat (usec): min=5520, max=54821, avg=29610.38, stdev=4894.24 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[11338], 5.00th=[19792], 10.00th=[21627], 20.00th=[27919], 00:38:32.418 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31589], 60.00th=[31851], 00:38:32.418 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:38:32.418 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[53740], 00:38:32.418 | 99.99th=[54789] 00:38:32.418 bw ( KiB/s): min= 1920, max= 2432, per=4.64%, avg=2153.05, stdev=141.78, samples=20 00:38:32.418 iops : min= 480, max= 608, avg=538.15, stdev=35.51, samples=20 00:38:32.418 lat (msec) : 10=0.89%, 20=4.85%, 50=94.19%, 100=0.07% 00:38:32.418 cpu : usr=99.10%, sys=0.61%, ctx=13, majf=0, minf=58 00:38:32.418 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=5400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename1: (groupid=0, jobs=1): err= 0: pid=3113677: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=456, BW=1828KiB/s (1871kB/s)(17.9MiB/10053msec) 00:38:32.418 slat (usec): min=5, max=100, avg=13.91, stdev=11.30 00:38:32.418 clat (msec): min=16, max=103, avg=34.91, stdev= 7.61 00:38:32.418 lat (msec): min=16, max=103, avg=34.93, stdev= 7.61 00:38:32.418 clat percentiles (msec): 00:38:32.418 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 32], 00:38:32.418 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:38:32.418 | 70.00th=[ 35], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 46], 00:38:32.418 | 99.00th=[ 61], 99.50th=[ 73], 99.90th=[ 104], 99.95th=[ 104], 00:38:32.418 | 99.99th=[ 104] 00:38:32.418 bw ( KiB/s): min= 1667, max= 1944, per=3.95%, avg=1831.15, stdev=74.49, samples=20 00:38:32.418 iops : min= 416, max= 486, avg=457.75, stdev=18.71, samples=20 00:38:32.418 lat (msec) : 20=0.44%, 50=97.02%, 100=2.26%, 250=0.28% 00:38:32.418 cpu : usr=99.05%, sys=0.63%, ctx=32, majf=0, minf=66 00:38:32.418 IO depths : 1=0.2%, 2=0.4%, 4=7.3%, 8=77.9%, 16=14.2%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=90.0%, 8=6.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113678: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=474, BW=1897KiB/s (1942kB/s)(18.7MiB/10086msec) 00:38:32.418 slat (usec): min=5, max=501, avg=12.68, stdev=12.17 00:38:32.418 clat 
(msec): min=16, max=103, avg=33.58, stdev= 6.98 00:38:32.418 lat (msec): min=16, max=103, avg=33.59, stdev= 6.98 00:38:32.418 clat percentiles (msec): 00:38:32.418 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 31], 00:38:32.418 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:38:32.418 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 47], 00:38:32.418 | 99.00th=[ 54], 99.50th=[ 61], 99.90th=[ 104], 99.95th=[ 104], 00:38:32.418 | 99.99th=[ 104] 00:38:32.418 bw ( KiB/s): min= 1756, max= 2016, per=4.11%, avg=1906.60, stdev=70.01, samples=20 00:38:32.418 iops : min= 439, max= 504, avg=476.65, stdev=17.50, samples=20 00:38:32.418 lat (msec) : 20=0.61%, 50=96.53%, 100=2.74%, 250=0.13% 00:38:32.418 cpu : usr=95.22%, sys=2.31%, ctx=47, majf=0, minf=48 00:38:32.418 IO depths : 1=0.7%, 2=1.4%, 4=8.0%, 8=75.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=90.2%, 8=6.2%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113679: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=426, BW=1707KiB/s (1748kB/s)(16.8MiB/10087msec) 00:38:32.418 slat (nsec): min=5833, max=90840, avg=15561.08, stdev=11735.27 00:38:32.418 clat (usec): min=18460, max=97160, avg=37280.53, stdev=7118.23 00:38:32.418 lat (usec): min=18470, max=97168, avg=37296.09, stdev=7118.36 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[20841], 5.00th=[26608], 10.00th=[30540], 20.00th=[31589], 00:38:32.418 | 30.00th=[32113], 40.00th=[32900], 50.00th=[38536], 60.00th=[40633], 00:38:32.418 | 70.00th=[42206], 80.00th=[43254], 90.00th=[44827], 95.00th=[46924], 00:38:32.418 | 99.00th=[52691], 99.50th=[55837], 99.90th=[96994], 99.95th=[96994], 00:38:32.418 | 99.99th=[96994] 00:38:32.418 bw ( KiB/s): min= 1408, max= 1952, per=3.70%, avg=1715.00, stdev=167.61, samples=20 00:38:32.418 iops : min= 352, max= 488, avg=428.75, stdev=41.90, samples=20 00:38:32.418 lat (msec) : 20=0.65%, 50=97.98%, 100=1.37% 00:38:32.418 cpu : usr=99.10%, sys=0.59%, ctx=12, majf=0, minf=39 00:38:32.418 IO depths : 1=2.7%, 2=5.4%, 4=14.2%, 8=66.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=91.5%, 8=4.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113681: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=485, BW=1944KiB/s (1990kB/s)(19.1MiB/10082msec) 00:38:32.418 slat (nsec): min=5822, max=80664, avg=13018.46, stdev=10382.69 00:38:32.418 clat (usec): min=13487, max=94759, avg=32721.23, stdev=5777.14 00:38:32.418 lat (usec): min=13494, max=94766, avg=32734.25, stdev=5777.63 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[20579], 5.00th=[22938], 10.00th=[26346], 20.00th=[30540], 00:38:32.418 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32375], 00:38:32.418 | 70.00th=[32900], 80.00th=[33817], 90.00th=[41157], 95.00th=[43254], 00:38:32.418 | 99.00th=[48497], 99.50th=[50594], 99.90th=[94897], 99.95th=[94897], 00:38:32.418 | 99.99th=[94897] 00:38:32.418 bw ( KiB/s): min= 1788, max= 2048, per=4.21%, avg=1953.00, stdev=72.14, 
samples=20 00:38:32.418 iops : min= 447, max= 512, avg=488.25, stdev=18.03, samples=20 00:38:32.418 lat (msec) : 20=0.63%, 50=98.80%, 100=0.57% 00:38:32.418 cpu : usr=99.03%, sys=0.64%, ctx=19, majf=0, minf=47 00:38:32.418 IO depths : 1=2.1%, 2=4.2%, 4=12.1%, 8=69.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=90.9%, 8=4.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113682: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=495, BW=1982KiB/s (2030kB/s)(19.5MiB/10074msec) 00:38:32.418 slat (nsec): min=6059, max=82461, avg=12185.50, stdev=8402.43 00:38:32.418 clat (usec): min=27600, max=95958, avg=32182.40, stdev=4202.21 00:38:32.418 lat (usec): min=27606, max=95968, avg=32194.58, stdev=4202.50 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[29230], 5.00th=[30278], 10.00th=[30540], 20.00th=[30802], 00:38:32.418 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:38:32.418 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:38:32.418 | 99.00th=[35390], 99.50th=[64226], 99.90th=[95945], 99.95th=[95945], 00:38:32.418 | 99.99th=[95945] 00:38:32.418 bw ( KiB/s): min= 1792, max= 2048, per=4.29%, avg=1989.70, stdev=77.23, samples=20 00:38:32.418 iops : min= 448, max= 512, avg=497.35, stdev=19.26, samples=20 00:38:32.418 lat (msec) : 50=99.36%, 100=0.64% 00:38:32.418 cpu : usr=99.34%, sys=0.38%, ctx=13, majf=0, minf=52 00:38:32.418 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113683: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=474, BW=1896KiB/s (1942kB/s)(18.6MiB/10054msec) 00:38:32.418 slat (nsec): min=5813, max=81763, avg=12401.67, stdev=9875.75 00:38:32.418 clat (usec): min=17519, max=96855, avg=33634.65, stdev=5983.47 00:38:32.418 lat (usec): min=17525, max=96863, avg=33647.05, stdev=5983.59 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[24511], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:38:32.418 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:38:32.418 | 70.00th=[32900], 80.00th=[33817], 90.00th=[40109], 95.00th=[44827], 00:38:32.418 | 99.00th=[57410], 99.50th=[71828], 99.90th=[96994], 99.95th=[96994], 00:38:32.418 | 99.99th=[96994] 00:38:32.418 bw ( KiB/s): min= 1512, max= 2048, per=4.10%, avg=1901.50, stdev=143.06, samples=20 00:38:32.418 iops : min= 378, max= 512, avg=475.30, stdev=35.78, samples=20 00:38:32.418 lat (msec) : 20=0.17%, 50=97.55%, 100=2.29% 00:38:32.418 cpu : usr=99.13%, sys=0.51%, ctx=28, majf=0, minf=40 00:38:32.418 IO depths : 1=0.1%, 2=0.1%, 4=4.6%, 8=79.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=89.7%, 8=7.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113684: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.8MiB/10058msec) 00:38:32.418 slat (nsec): min=5694, max=89189, avg=16142.63, stdev=12563.97 00:38:32.418 clat (usec): min=18083, max=87811, avg=35233.94, stdev=7128.95 00:38:32.418 lat (usec): min=18091, max=87821, avg=35250.09, stdev=7127.95 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[21365], 5.00th=[25035], 10.00th=[29754], 20.00th=[30802], 00:38:32.418 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32637], 60.00th=[33424], 00:38:32.418 | 70.00th=[39584], 80.00th=[41681], 90.00th=[43779], 95.00th=[46924], 00:38:32.418 | 99.00th=[55837], 99.50th=[58983], 99.90th=[87557], 99.95th=[87557], 00:38:32.418 | 99.99th=[87557] 00:38:32.418 bw ( KiB/s): min= 1664, max= 1948, per=3.91%, avg=1813.20, stdev=81.53, samples=20 00:38:32.418 iops : min= 416, max= 487, avg=453.30, stdev=20.38, samples=20 00:38:32.418 lat (msec) : 20=0.48%, 50=96.84%, 100=2.68% 00:38:32.418 cpu : usr=99.03%, sys=0.65%, ctx=18, majf=0, minf=62 00:38:32.418 IO depths : 1=1.0%, 2=2.0%, 4=9.8%, 8=73.6%, 16=13.6%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=90.6%, 8=5.8%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113685: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=455, BW=1823KiB/s (1866kB/s)(17.9MiB/10056msec) 00:38:32.418 slat (nsec): min=5851, max=95872, avg=15617.65, stdev=12662.02 00:38:32.418 clat (msec): min=16, max=112, avg=35.02, stdev= 7.43 00:38:32.418 lat (msec): min=16, max=112, avg=35.03, stdev= 7.42 00:38:32.418 clat percentiles (msec): 00:38:32.418 | 1.00th=[ 21], 5.00th=[ 26], 10.00th=[ 31], 20.00th=[ 32], 00:38:32.418 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:38:32.418 | 70.00th=[ 36], 80.00th=[ 42], 90.00th=[ 44], 95.00th=[ 46], 00:38:32.418 | 99.00th=[ 54], 99.50th=[ 62], 99.90th=[ 97], 99.95th=[ 113], 00:38:32.418 | 99.99th=[ 113] 00:38:32.418 bw ( KiB/s): min= 1664, max= 1944, per=3.93%, avg=1825.80, stdev=78.27, samples=20 00:38:32.418 iops : min= 416, max= 486, avg=456.45, stdev=19.57, samples=20 00:38:32.418 lat (msec) : 20=0.68%, 50=96.73%, 100=2.53%, 250=0.07% 00:38:32.418 cpu : usr=94.78%, sys=2.26%, ctx=22, majf=0, minf=40 00:38:32.418 IO depths : 1=1.0%, 2=2.3%, 4=11.4%, 8=72.6%, 16=12.7%, 32=0.0%, >=64=0.0% 00:38:32.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.418 issued rwts: total=4582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.418 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.418 filename2: (groupid=0, jobs=1): err= 0: pid=3113686: Wed May 15 10:32:16 2024 00:38:32.418 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.7MiB/10087msec) 00:38:32.418 slat (nsec): min=5832, max=95292, avg=16597.86, stdev=13126.42 00:38:32.418 clat (usec): min=16338, max=97159, avg=33662.93, stdev=6952.56 00:38:32.418 lat (usec): min=16347, max=97177, avg=33679.53, stdev=6952.61 00:38:32.418 clat percentiles (usec): 00:38:32.418 | 1.00th=[19792], 5.00th=[23725], 10.00th=[28181], 20.00th=[30540], 00:38:32.418 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32113], 60.00th=[32637], 
00:38:32.418 | 70.00th=[33424], 80.00th=[38011], 90.00th=[42730], 95.00th=[46400], 00:38:32.418 | 99.00th=[51643], 99.50th=[53740], 99.90th=[96994], 99.95th=[96994], 00:38:32.418 | 99.99th=[96994] 00:38:32.418 bw ( KiB/s): min= 1768, max= 2048, per=4.10%, avg=1904.60, stdev=60.21, samples=20 00:38:32.418 iops : min= 442, max= 512, avg=476.15, stdev=15.05, samples=20 00:38:32.418 lat (msec) : 20=1.36%, 50=96.94%, 100=1.70% 00:38:32.418 cpu : usr=98.73%, sys=0.87%, ctx=70, majf=0, minf=74 00:38:32.418 IO depths : 1=0.5%, 2=0.9%, 4=7.3%, 8=76.6%, 16=14.7%, 32=0.0%, >=64=0.0% 00:38:32.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.419 complete : 0=0.0%, 4=90.1%, 8=6.9%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:32.419 issued rwts: total=4778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:32.419 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:32.419 00:38:32.419 Run status group 0 (all jobs): 00:38:32.419 READ: bw=45.3MiB/s (47.5MB/s), 1707KiB/s-2530KiB/s (1748kB/s-2591kB/s), io=457MiB (479MB), run=10006-10087msec 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 bdev_null0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 [2024-05-15 10:32:16.977738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 bdev_null1 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
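The fio_bdev invocation traced above is the heart of this test: fio runs with SPDK's bdev ioengine, and both the bdev JSON configuration (/dev/fd/62) and the generated job file (/dev/fd/61) are streamed in over anonymous file descriptors so nothing touches disk. A minimal standalone sketch of the same pattern, assuming the plugin sits at build/fio/spdk_bdev and fio at /usr/src/fio/fio as in this workspace (bdev_config.json and randread.fio are placeholder names):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Process substitution hands fio /dev/fd/N paths, mirroring the trace above.
    LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf=<(cat bdev_config.json) \
        <(cat randread.fio)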
00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:32.419 { 00:38:32.419 "params": { 00:38:32.419 "name": "Nvme$subsystem", 00:38:32.419 "trtype": "$TEST_TRANSPORT", 00:38:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.419 "adrfam": "ipv4", 00:38:32.419 "trsvcid": "$NVMF_PORT", 00:38:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.419 "hdgst": ${hdgst:-false}, 00:38:32.419 "ddgst": ${ddgst:-false} 00:38:32.419 }, 00:38:32.419 "method": "bdev_nvme_attach_controller" 00:38:32.419 } 00:38:32.419 EOF 00:38:32.419 )") 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:32.419 { 00:38:32.419 "params": { 00:38:32.419 "name": "Nvme$subsystem", 00:38:32.419 "trtype": "$TEST_TRANSPORT", 00:38:32.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:32.419 "adrfam": "ipv4", 00:38:32.419 "trsvcid": "$NVMF_PORT", 00:38:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:32.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:32.419 "hdgst": ${hdgst:-false}, 00:38:32.419 "ddgst": ${ddgst:-false} 00:38:32.419 }, 00:38:32.419 "method": "bdev_nvme_attach_controller" 00:38:32.419 } 00:38:32.419 EOF 00:38:32.419 )") 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
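Each pass of the loop above appends one attach-controller stanza to the config array through a heredoc, with ${hdgst:-false}/${ddgst:-false} defaulting to false while the digest variables are unset; the stanzas are then comma-joined via IFS before being printed, which is exactly the shape of the printf output just below. A condensed sketch of that join, with a two-entry array standing in for the generated stanzas:

    config=('{ "params": { "name": "Nvme0" } }' '{ "params": { "name": "Nvme1" } }')
    IFS=,                          # "${config[*]}" now joins elements with a comma
    printf '%s\n' "${config[*]}"
    # -> { "params": { "name": "Nvme0" } },{ "params": { "name": "Nvme1" } }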
00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:32.419 "params": { 00:38:32.419 "name": "Nvme0", 00:38:32.419 "trtype": "tcp", 00:38:32.419 "traddr": "10.0.0.2", 00:38:32.419 "adrfam": "ipv4", 00:38:32.419 "trsvcid": "4420", 00:38:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:32.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:32.419 "hdgst": false, 00:38:32.419 "ddgst": false 00:38:32.419 }, 00:38:32.419 "method": "bdev_nvme_attach_controller" 00:38:32.419 },{ 00:38:32.419 "params": { 00:38:32.419 "name": "Nvme1", 00:38:32.419 "trtype": "tcp", 00:38:32.419 "traddr": "10.0.0.2", 00:38:32.419 "adrfam": "ipv4", 00:38:32.419 "trsvcid": "4420", 00:38:32.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:32.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:32.419 "hdgst": false, 00:38:32.419 "ddgst": false 00:38:32.419 }, 00:38:32.419 "method": "bdev_nvme_attach_controller" 00:38:32.419 }' 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:32.419 10:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:32.419 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:32.419 ... 00:38:32.419 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:32.419 ... 
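The filename0/filename1 job lines above are fio echoing back the job file gen_fio_conf streamed over /dev/fd/61: one randread job per attached bdev, with the bs=8k,16k,128k, numjobs=2, iodepth=8 and runtime=5 values set at dif.sh line 115 earlier in the trace. The generated text itself is never printed to the log; a plausible reconstruction, written to a regular file for readability:

    cat > dif_rand.fio <<'EOF'
    ; reconstruction -- the real job file is produced by gen_fio_conf; only its
    ; effects show in this log (2 jobs x numjobs=2 = the 4 threads started below)
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF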
00:38:32.419 fio-3.35 00:38:32.419 Starting 4 threads 00:38:32.419 EAL: No free 2048 kB hugepages reported on node 1 00:38:37.764 00:38:37.764 filename0: (groupid=0, jobs=1): err= 0: pid=3116024: Wed May 15 10:32:23 2024 00:38:37.764 read: IOPS=1956, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5005msec) 00:38:37.764 slat (nsec): min=5653, max=35482, avg=6273.10, stdev=1562.87 00:38:37.764 clat (usec): min=1743, max=9972, avg=4070.75, stdev=660.70 00:38:37.764 lat (usec): min=1749, max=10008, avg=4077.02, stdev=660.66 00:38:37.764 clat percentiles (usec): 00:38:37.764 | 1.00th=[ 2671], 5.00th=[ 3032], 10.00th=[ 3261], 20.00th=[ 3556], 00:38:37.764 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 4047], 60.00th=[ 4178], 00:38:37.764 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5211], 00:38:37.764 | 99.00th=[ 5604], 99.50th=[ 5800], 99.90th=[ 6783], 99.95th=[ 9503], 00:38:37.764 | 99.99th=[10028] 00:38:37.764 bw ( KiB/s): min=15024, max=16192, per=23.94%, avg=15664.00, stdev=424.66, samples=10 00:38:37.764 iops : min= 1878, max= 2024, avg=1958.00, stdev=53.08, samples=10 00:38:37.764 lat (msec) : 2=0.05%, 4=47.84%, 10=52.11% 00:38:37.764 cpu : usr=97.52%, sys=2.14%, ctx=8, majf=0, minf=9 00:38:37.764 IO depths : 1=0.1%, 2=1.0%, 4=67.7%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.764 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.764 issued rwts: total=9793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:37.764 filename0: (groupid=0, jobs=1): err= 0: pid=3116025: Wed May 15 10:32:23 2024 00:38:37.764 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5003msec) 00:38:37.764 slat (nsec): min=5645, max=33938, avg=8334.32, stdev=2501.68 00:38:37.764 clat (usec): min=2230, max=7262, avg=4281.98, stdev=655.11 00:38:37.764 lat (usec): min=2239, max=7268, avg=4290.32, stdev=654.73 00:38:37.764 clat percentiles (usec): 00:38:37.764 | 1.00th=[ 2835], 5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3752], 00:38:37.764 | 30.00th=[ 3916], 40.00th=[ 4080], 50.00th=[ 4228], 60.00th=[ 4359], 00:38:37.764 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5407], 00:38:37.764 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6521], 99.95th=[ 6587], 00:38:37.764 | 99.99th=[ 7242] 00:38:37.764 bw ( KiB/s): min=14192, max=15248, per=22.67%, avg=14835.33, stdev=370.42, samples=9 00:38:37.764 iops : min= 1774, max= 1906, avg=1854.33, stdev=46.24, samples=9 00:38:37.764 lat (msec) : 4=34.21%, 10=65.79% 00:38:37.764 cpu : usr=96.66%, sys=2.94%, ctx=5, majf=0, minf=9 00:38:37.764 IO depths : 1=0.1%, 2=1.3%, 4=66.7%, 8=31.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.764 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.764 issued rwts: total=9298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.764 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:37.764 filename1: (groupid=0, jobs=1): err= 0: pid=3116026: Wed May 15 10:32:23 2024 00:38:37.764 read: IOPS=2531, BW=19.8MiB/s (20.7MB/s)(98.9MiB/5002msec) 00:38:37.764 slat (nsec): min=5638, max=30109, avg=6084.21, stdev=1068.93 00:38:37.764 clat (usec): min=1320, max=47925, avg=3142.82, stdev=1245.41 00:38:37.764 lat (usec): min=1326, max=47954, avg=3148.91, stdev=1245.56 00:38:37.764 clat percentiles (usec): 00:38:37.764 | 1.00th=[ 1942], 5.00th=[ 2311], 10.00th=[ 
2474], 20.00th=[ 2671], 00:38:37.764 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 3097], 60.00th=[ 3228], 00:38:37.764 | 70.00th=[ 3359], 80.00th=[ 3523], 90.00th=[ 3785], 95.00th=[ 4015], 00:38:37.764 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 5800], 99.95th=[47973], 00:38:37.764 | 99.99th=[47973] 00:38:37.764 bw ( KiB/s): min=19200, max=21760, per=31.14%, avg=20376.89, stdev=971.86, samples=9 00:38:37.764 iops : min= 2400, max= 2720, avg=2547.11, stdev=121.48, samples=9 00:38:37.764 lat (msec) : 2=1.28%, 4=93.32%, 10=5.34%, 50=0.06% 00:38:37.764 cpu : usr=97.78%, sys=1.96%, ctx=10, majf=0, minf=0 00:38:37.764 IO depths : 1=0.6%, 2=2.5%, 4=69.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.764 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.765 issued rwts: total=12662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.765 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:37.765 filename1: (groupid=0, jobs=1): err= 0: pid=3116027: Wed May 15 10:32:23 2024 00:38:37.765 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5002msec) 00:38:37.765 slat (nsec): min=5647, max=29387, avg=6150.19, stdev=1267.84 00:38:37.765 clat (usec): min=2123, max=47524, avg=4342.81, stdev=1425.42 00:38:37.765 lat (usec): min=2129, max=47552, avg=4348.96, stdev=1425.61 00:38:37.765 clat percentiles (usec): 00:38:37.765 | 1.00th=[ 2966], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3785], 00:38:37.765 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4424], 00:38:37.765 | 70.00th=[ 4621], 80.00th=[ 4817], 90.00th=[ 5145], 95.00th=[ 5407], 00:38:37.765 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 7701], 99.95th=[47449], 00:38:37.765 | 99.99th=[47449] 00:38:37.765 bw ( KiB/s): min=13056, max=15456, per=22.30%, avg=14593.78, stdev=725.16, samples=9 00:38:37.765 iops : min= 1632, max= 1932, avg=1824.22, stdev=90.64, samples=9 00:38:37.765 lat (msec) : 4=33.14%, 10=66.77%, 50=0.09% 00:38:37.765 cpu : usr=97.32%, sys=2.38%, ctx=11, majf=0, minf=9 00:38:37.765 IO depths : 1=0.1%, 2=1.5%, 4=66.1%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:37.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.765 complete : 0=0.0%, 4=95.4%, 8=4.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:37.765 issued rwts: total=9181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:37.765 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:37.765 00:38:37.765 Run status group 0 (all jobs): 00:38:37.765 READ: bw=63.9MiB/s (67.0MB/s), 14.3MiB/s-19.8MiB/s (15.0MB/s-20.7MB/s), io=320MiB (335MB), run=5002-5005msec 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:37.765 00:38:37.765 real 0m24.469s 00:38:37.765 user 5m20.203s 00:38:37.765 sys 0m4.258s 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:37.765 10:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:37.765 ************************************ 00:38:37.765 END TEST fio_dif_rand_params 00:38:37.765 ************************************ 00:38:38.027 10:32:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:38.027 10:32:23 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:38:38.027 10:32:23 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:38.027 10:32:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:38.027 ************************************ 00:38:38.027 START TEST fio_dif_digest 00:38:38.027 ************************************ 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:38.027 
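For the digest test the target is rebuilt from scratch with a DIF-type-3 null bdev, and the create_subsystems call traced next expands to the same four RPCs seen earlier in this log. Condensed into plain rpc.py calls, with every argument taken from the trace (the suite's rpc_cmd wrapper is assumed to be equivalent to scripts/rpc.py against the default /var/tmp/spdk.sock):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }

    sub=0
    rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 3
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420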
10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:38.027 bdev_null0 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:38.027 [2024-05-15 10:32:23.664713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.027 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:38.027 { 00:38:38.027 "params": { 00:38:38.027 "name": "Nvme$subsystem", 00:38:38.027 "trtype": "$TEST_TRANSPORT", 00:38:38.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:38.028 "adrfam": "ipv4", 00:38:38.028 "trsvcid": "$NVMF_PORT", 00:38:38.028 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:38:38.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:38.028 "hdgst": ${hdgst:-false}, 00:38:38.028 "ddgst": ${ddgst:-false} 00:38:38.028 }, 00:38:38.028 "method": "bdev_nvme_attach_controller" 00:38:38.028 } 00:38:38.028 EOF 00:38:38.028 )") 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:38.028 "params": { 00:38:38.028 "name": "Nvme0", 00:38:38.028 "trtype": "tcp", 00:38:38.028 "traddr": "10.0.0.2", 00:38:38.028 "adrfam": "ipv4", 00:38:38.028 "trsvcid": "4420", 00:38:38.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:38.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:38.028 "hdgst": true, 00:38:38.028 "ddgst": true 00:38:38.028 }, 00:38:38.028 "method": "bdev_nvme_attach_controller" 00:38:38.028 }' 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:38.028 10:32:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:38.597 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:38.597 ... 
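One bash detail worth calling out from the template above: ${hdgst:-false} expands to the literal false whenever hdgst is unset, so the same heredoc serves both the digest-off rand_params runs and this digest-on run. A two-line illustration:

    unset hdgst; echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": false
    hdgst=true;  echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": true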
00:38:38.597 fio-3.35 00:38:38.597 Starting 3 threads 00:38:38.597 EAL: No free 2048 kB hugepages reported on node 1 00:38:50.849 00:38:50.849 filename0: (groupid=0, jobs=1): err= 0: pid=3117405: Wed May 15 10:32:34 2024 00:38:50.849 read: IOPS=132, BW=16.5MiB/s (17.3MB/s)(166MiB/10048msec) 00:38:50.849 slat (nsec): min=6075, max=37466, avg=6762.09, stdev=1177.75 00:38:50.849 clat (msec): min=8, max=102, avg=22.62, stdev=17.56 00:38:50.849 lat (msec): min=8, max=102, avg=22.63, stdev=17.56 00:38:50.849 clat percentiles (msec): 00:38:50.849 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:38:50.849 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:38:50.849 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 57], 95.00th=[ 59], 00:38:50.849 | 99.00th=[ 62], 99.50th=[ 97], 99.90th=[ 103], 99.95th=[ 103], 00:38:50.849 | 99.99th=[ 103] 00:38:50.849 bw ( KiB/s): min=12800, max=23296, per=32.56%, avg=16998.40, stdev=2963.88, samples=20 00:38:50.849 iops : min= 100, max= 182, avg=132.80, stdev=23.16, samples=20 00:38:50.849 lat (msec) : 10=3.16%, 20=76.17%, 50=1.65%, 100=18.87%, 250=0.15% 00:38:50.849 cpu : usr=97.02%, sys=2.70%, ctx=15, majf=0, minf=90 00:38:50.849 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:50.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.849 issued rwts: total=1330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.849 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:50.849 filename0: (groupid=0, jobs=1): err= 0: pid=3117406: Wed May 15 10:32:34 2024 00:38:50.849 read: IOPS=142, BW=17.8MiB/s (18.7MB/s)(179MiB/10054msec) 00:38:50.849 slat (nsec): min=6057, max=37298, avg=6868.87, stdev=1575.28 00:38:50.849 clat (msec): min=7, max=139, avg=21.01, stdev=16.83 00:38:50.849 lat (msec): min=7, max=139, avg=21.01, stdev=16.83 00:38:50.849 clat percentiles (msec): 00:38:50.849 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:38:50.849 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:38:50.849 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 56], 95.00th=[ 58], 00:38:50.849 | 99.00th=[ 62], 99.50th=[ 64], 99.90th=[ 103], 99.95th=[ 140], 00:38:50.849 | 99.99th=[ 140] 00:38:50.849 bw ( KiB/s): min=11520, max=23552, per=35.07%, avg=18304.00, stdev=3733.43, samples=20 00:38:50.849 iops : min= 90, max= 184, avg=143.00, stdev=29.17, samples=20 00:38:50.849 lat (msec) : 10=5.79%, 20=76.97%, 50=1.19%, 100=15.77%, 250=0.28% 00:38:50.849 cpu : usr=97.21%, sys=2.55%, ctx=17, majf=0, minf=184 00:38:50.849 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:50.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.849 issued rwts: total=1433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.849 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:50.849 filename0: (groupid=0, jobs=1): err= 0: pid=3117408: Wed May 15 10:32:34 2024 00:38:50.849 read: IOPS=133, BW=16.6MiB/s (17.4MB/s)(167MiB/10050msec) 00:38:50.849 slat (nsec): min=5921, max=32182, avg=6741.43, stdev=1181.38 00:38:50.849 clat (usec): min=9166, max=99537, avg=22506.95, stdev=16661.15 00:38:50.849 lat (usec): min=9172, max=99543, avg=22513.69, stdev=16661.14 00:38:50.849 clat percentiles (usec): 00:38:50.849 | 1.00th=[10159], 5.00th=[11076], 10.00th=[11731], 20.00th=[12911], 
00:38:50.850 | 30.00th=[13829], 40.00th=[14615], 50.00th=[15401], 60.00th=[16319], 00:38:50.850 | 70.00th=[17433], 80.00th=[20055], 90.00th=[56886], 95.00th=[58459], 00:38:50.850 | 99.00th=[60031], 99.50th=[61604], 99.90th=[99091], 99.95th=[99091], 00:38:50.850 | 99.99th=[99091] 00:38:50.850 bw ( KiB/s): min=11520, max=25088, per=32.72%, avg=17077.00, stdev=4115.20, samples=20 00:38:50.850 iops : min= 90, max= 196, avg=133.40, stdev=32.15, samples=20 00:38:50.850 lat (msec) : 10=0.67%, 20=79.28%, 50=1.87%, 100=18.18% 00:38:50.850 cpu : usr=96.51%, sys=3.21%, ctx=24, majf=0, minf=146 00:38:50.850 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:50.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:50.850 issued rwts: total=1337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:50.850 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:50.850 00:38:50.850 Run status group 0 (all jobs): 00:38:50.850 READ: bw=51.0MiB/s (53.5MB/s), 16.5MiB/s-17.8MiB/s (17.3MB/s-18.7MB/s), io=513MiB (537MB), run=10048-10054msec 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:38:50.850 00:38:50.850 real 0m11.222s 00:38:50.850 user 0m42.871s 00:38:50.850 sys 0m1.166s 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:50.850 10:32:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:50.850 ************************************ 00:38:50.850 END TEST fio_dif_digest 00:38:50.850 ************************************ 00:38:50.850 10:32:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:38:50.850 10:32:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:50.850 rmmod nvme_tcp 00:38:50.850 rmmod nvme_fabrics 00:38:50.850 rmmod nvme_keyring 00:38:50.850 10:32:34 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3107081 ']' 00:38:50.850 10:32:34 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3107081 00:38:50.850 10:32:34 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 3107081 ']' 00:38:50.850 10:32:34 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 3107081 00:38:50.850 10:32:34 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:38:50.850 10:32:34 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:38:50.850 10:32:34 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3107081 00:38:50.850 10:32:35 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:38:50.850 10:32:35 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:38:50.850 10:32:35 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3107081' 00:38:50.850 killing process with pid 3107081 00:38:50.850 10:32:35 nvmf_dif -- common/autotest_common.sh@966 -- # kill 3107081 00:38:50.850 [2024-05-15 10:32:35.026060] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:38:50.850 10:32:35 nvmf_dif -- common/autotest_common.sh@971 -- # wait 3107081 00:38:50.850 10:32:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:50.850 10:32:35 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:52.771 Waiting for block devices as requested 00:38:52.771 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:53.032 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:53.032 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:53.032 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:53.294 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:53.294 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:53.294 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:53.294 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:53.556 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:53.556 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:53.817 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:53.817 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:53.817 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:54.078 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:54.078 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:54.078 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:54.078 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:54.340 10:32:40 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:54.340 10:32:40 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:54.340 10:32:40 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:54.340 10:32:40 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:54.340 10:32:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.340 10:32:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:54.340 10:32:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.894 10:32:42 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:56.894 00:38:56.894 real 1m17.114s 00:38:56.894 user 8m5.920s 00:38:56.894 
sys 0m19.044s 00:38:56.894 10:32:42 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:38:56.894 10:32:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:56.894 ************************************ 00:38:56.894 END TEST nvmf_dif 00:38:56.894 ************************************ 00:38:56.894 10:32:42 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:56.894 10:32:42 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:38:56.894 10:32:42 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:38:56.894 10:32:42 -- common/autotest_common.sh@10 -- # set +x 00:38:56.894 ************************************ 00:38:56.894 START TEST nvmf_abort_qd_sizes 00:38:56.894 ************************************ 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:56.894 * Looking for test storage... 00:38:56.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.894 10:32:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:38:56.894 10:32:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:03.491 10:32:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:03.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:03.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:03.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:03.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
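With two physical ports found (cvl_0_0 and cvl_0_1), the nvmf_tcp_init sequence traced next splits them across a network namespace so target and initiator talk over a real link: the target port moves into cvl_0_0_ns_spdk as 10.0.0.2 while the initiator port stays in the root namespace as 10.0.0.1. Condensed from the commands in the trace (address flushes omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # cross-namespace sanity check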
00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:03.491 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:03.492 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:03.753 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:03.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:03.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:39:03.753 00:39:03.753 --- 10.0.0.2 ping statistics --- 00:39:03.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.753 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:39:03.753 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:03.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:03.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.449 ms 00:39:03.753 00:39:03.753 --- 10.0.0.1 ping statistics --- 00:39:03.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:03.753 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:39:03.753 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:03.753 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:39:03.753 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:03.753 10:32:49 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:07.095 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:07.095 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3126650 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3126650 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 3126650 ']' 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:07.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:07.357 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:07.357 [2024-05-15 10:32:53.110915] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:39:07.357 [2024-05-15 10:32:53.110963] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:07.357 EAL: No free 2048 kB hugepages reported on node 1 00:39:07.617 [2024-05-15 10:32:53.177347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:07.617 [2024-05-15 10:32:53.210206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:07.617 [2024-05-15 10:32:53.210247] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:07.617 [2024-05-15 10:32:53.210255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:07.617 [2024-05-15 10:32:53.210261] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:07.617 [2024-05-15 10:32:53.210268] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:07.617 [2024-05-15 10:32:53.210345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:07.617 [2024-05-15 10:32:53.210588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:07.617 [2024-05-15 10:32:53.210744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.617 [2024-05-15 10:32:53.210744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:39:08.190 10:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:08.190 ************************************ 00:39:08.190 START TEST spdk_target_abort 00:39:08.190 ************************************ 00:39:08.190 10:32:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:39:08.190 10:32:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:08.190 10:32:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:39:08.190 10:32:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.190 10:32:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.763 spdk_targetn1 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.763 [2024-05-15 10:32:54.278410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.763 [2024-05-15 10:32:54.318448] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:39:08.763 [2024-05-15 10:32:54.318711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:08.763 10:32:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:08.763 EAL: No free 2048 kB hugepages reported on node 1 00:39:08.763 [2024-05-15 10:32:54.554091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:504 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:08.763 [2024-05-15 10:32:54.554117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:39:08.763 [2024-05-15 10:32:54.555718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:528 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:08.763 [2024-05-15 10:32:54.555734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0044 p:1 m:0 dnr:0 00:39:09.025 [2024-05-15 10:32:54.600353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1424 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:09.025 [2024-05-15 10:32:54.600370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b3 p:1 m:0 dnr:0 00:39:09.025 [2024-05-15 10:32:54.615748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1648 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:09.025 [2024-05-15 10:32:54.615770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d0 p:1 m:0 dnr:0 00:39:09.025 [2024-05-15 10:32:54.629815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1952 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:09.025 [2024-05-15 10:32:54.629831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00f6 p:1 m:0 dnr:0 00:39:09.025 [2024-05-15 10:32:54.640688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2248 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:39:09.025 [2024-05-15 10:32:54.640705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:39:09.025 [2024-05-15 10:32:54.663256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2640 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:39:09.025 [2024-05-15 10:32:54.663279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:39:09.025 [2024-05-15 10:32:54.713750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3576 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:09.025 [2024-05-15 10:32:54.713768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00c1 p:0 m:0 dnr:0 00:39:09.025 [2024-05-15 10:32:54.721744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3712 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:39:09.025 [2024-05-15 10:32:54.721765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d1 p:0 m:0 dnr:0 00:39:12.333 Initializing NVMe Controllers 00:39:12.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:12.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 
1 with lcore 0 00:39:12.333 Initialization complete. Launching workers. 00:39:12.333 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6799, failed: 9 00:39:12.333 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2500, failed to submit 4308 00:39:12.333 success 857, unsuccess 1643, failed 0 00:39:12.333 10:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:12.333 10:32:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:12.333 EAL: No free 2048 kB hugepages reported on node 1 00:39:12.333 [2024-05-15 10:32:57.708532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:672 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:39:12.333 [2024-05-15 10:32:57.708564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:39:12.333 [2024-05-15 10:32:57.796409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:2616 len:8 PRP1 0x200007c48000 PRP2 0x0 00:39:12.333 [2024-05-15 10:32:57.796436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:39:15.639 Initializing NVMe Controllers 00:39:15.639 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:15.639 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:15.640 Initialization complete. Launching workers. 00:39:15.640 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8669, failed: 2 00:39:15.640 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7453 00:39:15.640 success 365, unsuccess 853, failed 0 00:39:15.640 10:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:15.640 10:33:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:15.640 EAL: No free 2048 kB hugepages reported on node 1 00:39:15.640 [2024-05-15 10:33:01.087711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:170 nsid:1 lba:1352 len:8 PRP1 0x2000078c8000 PRP2 0x0 00:39:15.640 [2024-05-15 10:33:01.087763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:170 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:39:17.027 [2024-05-15 10:33:02.807963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:180 nsid:1 lba:165512 len:8 PRP1 0x200007900000 PRP2 0x0 00:39:17.027 [2024-05-15 10:33:02.807990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:180 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:39:18.414 Initializing NVMe Controllers 00:39:18.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:18.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:18.414 Initialization complete. Launching workers. 
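These back-to-back runs are one sweep of the queue-depth list in abort_qd_sizes.sh: the abort example is pointed at the same subsystem with -q 4, 24 and 64 in turn, and each summary counts how many in-flight I/Os the submitted aborts caught. Stripped of the xtrace noise, the sweep amounts to:

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # 50/50 read/write at 4 KiB; a deeper queue leaves more commands in flight to abort
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

In the summary lines, 'success'/'unsuccess'/'failed' break down the submitted aborts: roughly, aborts that caught their target command, aborts that completed without catching it, and aborts that themselves errored out.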
00:39:18.414 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 36314, failed: 2 00:39:18.414 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2785, failed to submit 33531 00:39:18.414 success 752, unsuccess 2033, failed 0 00:39:18.414 10:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:18.414 10:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:18.414 10:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:18.414 10:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:18.414 10:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:18.414 10:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:18.414 10:33:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:20.333 10:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:20.333 10:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3126650 00:39:20.333 10:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 3126650 ']' 00:39:20.333 10:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 3126650 00:39:20.333 10:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:39:20.333 10:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:20.333 10:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3126650 00:39:20.333 10:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:39:20.333 10:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:39:20.333 10:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3126650' 00:39:20.333 killing process with pid 3126650 00:39:20.333 10:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 3126650 00:39:20.333 [2024-05-15 10:33:06.017385] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:39:20.333 10:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 3126650 00:39:20.596 00:39:20.596 real 0m12.166s 00:39:20.596 user 0m49.261s 00:39:20.596 sys 0m2.053s 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:20.596 ************************************ 00:39:20.596 END TEST spdk_target_abort 00:39:20.596 ************************************ 00:39:20.596 10:33:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:20.596 10:33:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:39:20.596 10:33:06 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:39:20.596 10:33:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:20.596 ************************************ 00:39:20.596 START TEST kernel_target_abort 00:39:20.596 ************************************ 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:20.596 10:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:23.906 Waiting for block devices as requested 00:39:23.906 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:23.906 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:23.906 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:24.168 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:24.168 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:24.168 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:24.430 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:24.430 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:24.430 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:24.691 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:24.691 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:24.691 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:24.953 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:24.953 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:24.953 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:24.953 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:25.214 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:25.522 No valid GPT data, bailing 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:25.522 10:33:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:39:25.522 00:39:25.522 Discovery Log Number of Records 2, Generation counter 2 00:39:25.522 =====Discovery Log Entry 0====== 00:39:25.522 trtype: tcp 00:39:25.522 adrfam: ipv4 00:39:25.522 subtype: current discovery subsystem 00:39:25.522 treq: not specified, sq flow control disable supported 00:39:25.522 portid: 1 00:39:25.522 trsvcid: 4420 00:39:25.522 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:25.522 traddr: 10.0.0.1 00:39:25.522 eflags: none 00:39:25.522 sectype: none 00:39:25.522 =====Discovery Log Entry 1====== 00:39:25.522 trtype: tcp 00:39:25.522 adrfam: ipv4 00:39:25.522 subtype: nvme subsystem 00:39:25.522 treq: not specified, sq flow control disable supported 00:39:25.522 portid: 1 00:39:25.522 trsvcid: 4420 00:39:25.522 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:25.522 traddr: 10.0.0.1 00:39:25.522 eflags: none 00:39:25.522 sectype: none 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.522 10:33:11 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:25.522 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.523 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:25.523 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.523 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:25.523 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:25.523 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:25.523 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:25.523 10:33:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:25.523 EAL: No free 2048 kB hugepages reported on node 1 00:39:28.831 Initializing NVMe Controllers 00:39:28.831 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:28.831 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:28.831 Initialization complete. Launching workers. 00:39:28.831 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28124, failed: 0 00:39:28.831 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28124, failed to submit 0 00:39:28.831 success 0, unsuccess 28124, failed 0 00:39:28.831 10:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:28.831 10:33:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:28.831 EAL: No free 2048 kB hugepages reported on node 1 00:39:32.137 Initializing NVMe Controllers 00:39:32.138 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:32.138 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:32.138 Initialization complete. Launching workers. 
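kernel_target_abort, running above, swaps the SPDK target for the kernel's nvmet driver, assembled entirely through the mkdir/echo/ln -s sequence earlier in the trace. Bash xtrace does not print redirections, so the echo targets are invisible there; filled in with the standard nvmet configfs attribute names (an assumption about where each echo lands, not shown in the log), the setup is roughly:

  modprobe nvmet                     # nvmet_tcp gets pulled in when the TCP port binds
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # the unclaimed disk found above
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp  > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"             # exposes the subsystem on the port

The 'nvme discover' output above, with its two log entries (the discovery subsystem plus testnqn), is the check that the symlink took effect before the abort sweep is repeated against 10.0.0.1.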
00:39:32.138 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 61833, failed: 0 00:39:32.138 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15582, failed to submit 46251 00:39:32.138 success 0, unsuccess 15582, failed 0 00:39:32.138 10:33:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:32.138 10:33:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:32.138 EAL: No free 2048 kB hugepages reported on node 1 00:39:35.447 Initializing NVMe Controllers 00:39:35.447 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:35.447 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:35.447 Initialization complete. Launching workers. 00:39:35.447 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 60508, failed: 0 00:39:35.447 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15094, failed to submit 45414 00:39:35.447 success 0, unsuccess 15094, failed 0 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:35.447 10:33:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:38.757 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:39:38.757 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:38.757 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:40.147 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:40.409 00:39:40.409 real 0m19.900s 00:39:40.409 user 0m6.054s 00:39:40.409 sys 0m6.661s 00:39:40.409 10:33:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:39:40.409 10:33:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:40.409 ************************************ 00:39:40.409 END TEST kernel_target_abort 00:39:40.409 ************************************ 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:40.409 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:40.409 rmmod nvme_tcp 00:39:40.409 rmmod nvme_fabrics 00:39:40.672 rmmod nvme_keyring 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3126650 ']' 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3126650 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 3126650 ']' 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 3126650 00:39:40.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (3126650) - No such process 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 3126650 is not found' 00:39:40.672 Process with pid 3126650 is not found 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:40.672 10:33:26 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:43.987 Waiting for block devices as requested 00:39:43.987 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:43.987 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:43.987 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:44.250 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:44.250 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:44.250 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:44.513 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:44.513 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:44.513 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:44.774 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:44.774 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:44.774 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:45.036 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:45.036 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:45.036 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:45.036 0000:00:01.0 
(8086 0b00): vfio-pci -> ioatdma 00:39:45.297 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:45.559 10:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:45.559 10:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:45.559 10:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:45.559 10:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:45.559 10:33:31 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:45.559 10:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:45.559 10:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:47.478 10:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:47.478 00:39:47.478 real 0m50.962s 00:39:47.478 user 1m0.506s 00:39:47.478 sys 0m19.043s 00:39:47.478 10:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:39:47.478 10:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:47.478 ************************************ 00:39:47.478 END TEST nvmf_abort_qd_sizes 00:39:47.478 ************************************ 00:39:47.478 10:33:33 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:47.478 10:33:33 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:39:47.478 10:33:33 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:39:47.478 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:39:47.744 ************************************ 00:39:47.744 START TEST keyring_file 00:39:47.744 ************************************ 00:39:47.744 10:33:33 keyring_file -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:47.744 * Looking for test storage... 
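The block above is the suite's EXIT trap unwinding everything in reverse. The 'No such process' from killprocess is expected: pid 3126650 was already killed at the end of spdk_target_abort, so the trap only confirms it is gone. In outline, assuming the namespace and interface names from the setup sketch earlier (_remove_spdk_ns is collapsed in the trace):

  modprobe -v -r nvme-tcp           # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill -0 3126650 2>/dev/null || echo 'Process with pid 3126650 is not found'
  setup.sh reset                    # hand the ioatdma/nvme devices back to kernel drivers
  ip netns delete cvl_0_0_ns_spdk   # what _remove_spdk_ns presumably does here
  ip -4 addr flush cvl_0_1          # leave the initiator port unconfigured for the next test

With that, nvmf_abort_qd_sizes reports its 50.9 s wall time and autotest moves on to keyring_file.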
00:39:47.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:47.745 10:33:33 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:47.745 10:33:33 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:47.745 10:33:33 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:47.745 10:33:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.745 10:33:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.745 10:33:33 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.745 10:33:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:47.745 10:33:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@47 -- # : 0 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ep7oGYkLuW 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:47.745 10:33:33 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ep7oGYkLuW 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ep7oGYkLuW 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Ep7oGYkLuW 00:39:47.745 10:33:33 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WZ56uWgYFu 00:39:47.745 10:33:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:47.745 10:33:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:48.049 10:33:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WZ56uWgYFu 00:39:48.049 10:33:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WZ56uWgYFu 00:39:48.049 10:33:33 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WZ56uWgYFu 00:39:48.049 10:33:33 keyring_file -- keyring/file.sh@30 -- # tgtpid=3137598 00:39:48.049 10:33:33 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3137598 00:39:48.049 10:33:33 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:48.049 10:33:33 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 3137598 ']' 00:39:48.049 10:33:33 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.049 10:33:33 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:48.049 10:33:33 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.049 10:33:33 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:48.049 10:33:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:48.049 [2024-05-15 10:33:33.615210] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
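keyring_file is about to drive bdevperf over NVMe/TCP with TLS, so before starting anything it materialises two PSKs (key0/key1) as mode-0600 files. The 'python -' steps above are format_interchange_psk turning the raw hex into the NVMe TLS PSK interchange string; a standalone sketch of that encoding, assuming the spec's layout of a NVMeTLSkey-1 prefix, a hash identifier (00 = unhashed key, matching digest 0 in the trace) and base64 of the key with its CRC-32 appended:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)        # e.g. /tmp/tmp.Ep7oGYkLuW above
  python3 -c 'import base64,struct,sys,zlib; raw=bytes.fromhex(sys.argv[1]); crc=struct.pack("<I", zlib.crc32(raw)); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw+crc).decode())' "$key" > "$path"
  chmod 0600 "$path"    # keyring/common.sh does the same before registering the key

The files are then registered with keyring_file_add_key and referenced as '--psk key0' when the controller is attached, which is where the 'TLS support is considered experimental' notices further down come from.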
00:39:48.049 [2024-05-15 10:33:33.615284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137598 ] 00:39:48.049 EAL: No free 2048 kB hugepages reported on node 1 00:39:48.049 [2024-05-15 10:33:33.676853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.049 [2024-05-15 10:33:33.709793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:39:48.623 10:33:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:48.623 [2024-05-15 10:33:34.366304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:48.623 null0 00:39:48.623 [2024-05-15 10:33:34.398325] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:39:48.623 [2024-05-15 10:33:34.398373] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:48.623 [2024-05-15 10:33:34.398623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:48.623 [2024-05-15 10:33:34.406359] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:39:48.623 10:33:34 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:39:48.623 10:33:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:48.885 [2024-05-15 10:33:34.418391] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:48.885 request: 00:39:48.885 { 00:39:48.885 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:48.885 "secure_channel": false, 00:39:48.885 "listen_address": { 00:39:48.885 "trtype": "tcp", 00:39:48.885 "traddr": "127.0.0.1", 00:39:48.885 "trsvcid": "4420" 00:39:48.885 }, 00:39:48.885 "method": "nvmf_subsystem_add_listener", 00:39:48.885 "req_id": 1 00:39:48.885 } 00:39:48.885 Got JSON-RPC error response 00:39:48.885 response: 00:39:48.885 { 00:39:48.885 "code": -32602, 00:39:48.885 
"message": "Invalid parameters" 00:39:48.885 } 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:39:48.885 10:33:34 keyring_file -- keyring/file.sh@46 -- # bperfpid=3137756 00:39:48.885 10:33:34 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3137756 /var/tmp/bperf.sock 00:39:48.885 10:33:34 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 3137756 ']' 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:48.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:48.885 [2024-05-15 10:33:34.473209] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 00:39:48.885 [2024-05-15 10:33:34.473255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137756 ] 00:39:48.885 EAL: No free 2048 kB hugepages reported on node 1 00:39:48.885 [2024-05-15 10:33:34.547748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.885 [2024-05-15 10:33:34.578509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:48.885 10:33:34 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:39:48.885 10:33:34 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW 00:39:48.885 10:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW 00:39:49.146 10:33:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WZ56uWgYFu 00:39:49.146 10:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WZ56uWgYFu 00:39:49.407 10:33:34 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:39:49.407 10:33:34 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:39:49.407 10:33:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.407 10:33:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.407 10:33:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:39:49.407 10:33:35 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.Ep7oGYkLuW == \/\t\m\p\/\t\m\p\.\E\p\7\o\G\Y\k\L\u\W ]] 00:39:49.407 10:33:35 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:39:49.407 10:33:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:49.407 10:33:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.407 10:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.407 10:33:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:49.669 10:33:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.WZ56uWgYFu == \/\t\m\p\/\t\m\p\.\W\Z\5\6\u\W\g\Y\F\u ]] 00:39:49.670 10:33:35 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:39:49.670 10:33:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:49.670 10:33:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.670 10:33:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.670 10:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.670 10:33:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:49.931 10:33:35 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:39:49.931 10:33:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:39:49.931 10:33:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:49.931 10:33:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:49.931 10:33:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:49.931 10:33:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:49.931 10:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:49.931 10:33:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:49.931 10:33:35 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:49.931 10:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:50.192 [2024-05-15 10:33:35.787424] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:50.192 nvme0n1 00:39:50.192 10:33:35 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:39:50.192 10:33:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:50.192 10:33:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:50.192 10:33:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:50.192 10:33:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:50.192 10:33:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:50.453 10:33:36 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:39:50.453 10:33:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:39:50.453 10:33:36 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:50.453 10:33:36 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:50.453 10:33:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:50.453 10:33:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:50.453 10:33:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:50.453 10:33:36 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:39:50.453 10:33:36 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:50.715 Running I/O for 1 seconds...
00:39:51.659
00:39:51.659 Latency(us)
00:39:51.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:51.659 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:39:51.659 nvme0n1 : 1.03 3823.68 14.94 0.00 0.00 32952.75 6171.31 47622.83
00:39:51.659 ===================================================================================================================
00:39:51.659 Total : 3823.68 14.94 0.00 0.00 32952.75 6171.31 47622.83
00:39:51.659 0
00:39:51.659 10:33:37 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:39:51.659 10:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:39:51.922 10:33:37 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:51.922 10:33:37 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:39:51.922 10:33:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:51.922 10:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:52.184 10:33:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:39:52.184 10:33:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:39:52.184 10:33:37 keyring_file -- common/autotest_common.sh@649 -- # local es=0
00:39:52.184 10:33:37 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:39:52.184 10:33:37 keyring_file -- common/autotest_common.sh@637 -- #
local arg=bperf_cmd 00:39:52.184 10:33:37 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:52.184 10:33:37 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:39:52.184 10:33:37 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:39:52.184 10:33:37 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:52.184 10:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:52.184 [2024-05-15 10:33:37.970488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:52.184 [2024-05-15 10:33:37.970574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c1f30 (107): Transport endpoint is not connected 00:39:52.184 [2024-05-15 10:33:37.971570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c1f30 (9): Bad file descriptor 00:39:52.184 [2024-05-15 10:33:37.972571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:39:52.184 [2024-05-15 10:33:37.972579] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:52.184 [2024-05-15 10:33:37.972584] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
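The traces above exercise the expected-failure path of keyring/file.sh@69: attaching nvme0 with key1, which is not the PSK the target was provisioned with, must fail, and the harness inverts the command's exit status before recording the JSON-RPC error that follows. A simplified sketch of that inversion helper, modeled on the NOT()/valid_exec_arg pattern visible in the autotest_common.sh trace lines (the real helper also validates that its argument is actually executable):

    # Run a command that is expected to fail; succeed only if it failed.
    NOT() {
        local es=0
        "$@" || es=$?
        # Exit codes above 128 mean death by signal - treat as a real failure.
        (( es > 128 )) && return "$es"
        # A zero exit means the command unexpectedly succeeded.
        (( es == 0 )) && return 1
        return 0
    }

    # Expected to fail: key1 does not match the PSK behind nqn.2016-06.io.spdk:cnode0.
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1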
00:39:52.184 request:
00:39:52.184 {
00:39:52.184 "name": "nvme0",
00:39:52.184 "trtype": "tcp",
00:39:52.184 "traddr": "127.0.0.1",
00:39:52.184 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:52.184 "adrfam": "ipv4",
00:39:52.184 "trsvcid": "4420",
00:39:52.184 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:52.184 "psk": "key1",
00:39:52.184 "method": "bdev_nvme_attach_controller",
00:39:52.184 "req_id": 1
00:39:52.184 }
00:39:52.184 Got JSON-RPC error response
00:39:52.184 response:
00:39:52.184 {
00:39:52.184 "code": -32602,
00:39:52.184 "message": "Invalid parameters"
00:39:52.184 }
00:39:52.446 10:33:37 keyring_file -- common/autotest_common.sh@652 -- # es=1
00:39:52.446 10:33:37 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:39:52.446 10:33:37 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:39:52.446 10:33:37 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:39:52.446 10:33:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0
00:39:52.446 10:33:37 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:52.446 10:33:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:52.446 10:33:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:52.446 10:33:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:52.446 10:33:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:52.446 10:33:38 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:39:52.446 10:33:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1
00:39:52.446 10:33:38 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:52.446 10:33:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:52.446 10:33:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:52.446 10:33:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:52.446 10:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:52.707 10:33:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:39:52.707 10:33:38 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0
00:39:52.707 10:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:39:52.708 10:33:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1
00:39:52.708 10:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:39:52.969 10:33:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys
00:39:52.969 10:33:38 keyring_file -- keyring/file.sh@77 -- # jq length
00:39:52.969 10:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:53.230 10:33:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 ))
00:39:53.230 10:33:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.Ep7oGYkLuW
00:39:53.230 10:33:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW
00:39:53.230 10:33:38 keyring_file -- common/autotest_common.sh@649 -- # local es=0
00:39:53.231 10:33:38
keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW
00:39:53.231 10:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW
00:39:53.231 [2024-05-15 10:33:38.930337] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Ep7oGYkLuW': 0100660
00:39:53.231 [2024-05-15 10:33:38.930353] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:39:53.231 request:
00:39:53.231 {
00:39:53.231 "name": "key0",
00:39:53.231 "path": "/tmp/tmp.Ep7oGYkLuW",
00:39:53.231 "method": "keyring_file_add_key",
00:39:53.231 "req_id": 1
00:39:53.231 }
00:39:53.231 Got JSON-RPC error response
00:39:53.231 response:
00:39:53.231 {
00:39:53.231 "code": -1,
00:39:53.231 "message": "Operation not permitted"
00:39:53.231 }
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@652 -- # es=1
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:39:53.231 10:33:38 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:39:53.231 10:33:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.Ep7oGYkLuW
00:39:53.231 10:33:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW
00:39:53.231 10:33:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ep7oGYkLuW
00:39:53.492 10:33:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.Ep7oGYkLuW
00:39:53.492 10:33:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0
00:39:53.492 10:33:39 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:53.492 10:33:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:53.492 10:33:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:53.492 10:33:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:53.492 10:33:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:53.492 10:33:39 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 ))
00:39:53.492 10:33:39 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:53.492 10:33:39 keyring_file -- common/autotest_common.sh@649 -- # local es=0
00:39:53.492 10:33:39 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:53.492 10:33:39
keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd
00:39:53.492 10:33:39 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:39:53.492 10:33:39 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd
00:39:53.492 10:33:39 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in
00:39:53.492 10:33:39 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:53.492 10:33:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:53.754 [2024-05-15 10:33:39.395577] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Ep7oGYkLuW': No such file or directory
00:39:53.754 [2024-05-15 10:33:39.395591] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:39:53.754 [2024-05-15 10:33:39.395607] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:39:53.754 [2024-05-15 10:33:39.395612] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:39:53.754 [2024-05-15 10:33:39.395616] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:39:53.754 request:
00:39:53.754 {
00:39:53.754 "name": "nvme0",
00:39:53.754 "trtype": "tcp",
00:39:53.754 "traddr": "127.0.0.1",
00:39:53.754 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:53.754 "adrfam": "ipv4",
00:39:53.754 "trsvcid": "4420",
00:39:53.754 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:53.754 "psk": "key0",
00:39:53.754 "method": "bdev_nvme_attach_controller",
00:39:53.754 "req_id": 1
00:39:53.754 }
00:39:53.754 Got JSON-RPC error response
00:39:53.754 response:
00:39:53.754 {
00:39:53.754 "code": -19,
00:39:53.754 "message": "No such device"
00:39:53.754 }
00:39:53.754 10:33:39 keyring_file -- common/autotest_common.sh@652 -- # es=1
00:39:53.754 10:33:39 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 ))
00:39:53.754 10:33:39 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]]
00:39:53.754 10:33:39 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 ))
00:39:53.754 10:33:39 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0
00:39:53.754 10:33:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:39:54.016 10:33:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@17 -- # name=key0
00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@17 -- # digest=0
00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@18 -- # mktemp
00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FRrZd1HMIi
00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:39:54.016 10:33:39
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:54.016 10:33:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:39:54.016 10:33:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:39:54.016 10:33:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:39:54.016 10:33:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:39:54.016 10:33:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FRrZd1HMIi 00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FRrZd1HMIi 00:39:54.016 10:33:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.FRrZd1HMIi 00:39:54.016 10:33:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FRrZd1HMIi 00:39:54.016 10:33:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FRrZd1HMIi 00:39:54.278 10:33:39 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.278 10:33:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:54.278 nvme0n1 00:39:54.278 10:33:40 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:39:54.278 10:33:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:54.278 10:33:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.278 10:33:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.278 10:33:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:54.278 10:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.540 10:33:40 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:39:54.540 10:33:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:39:54.540 10:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:54.802 10:33:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:39:54.802 10:33:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:39:54.802 10:33:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.802 10:33:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:54.802 10:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.802 10:33:40 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:39:54.802 10:33:40 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:39:54.802 10:33:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:54.802 10:33:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:54.802 10:33:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:54.802 10:33:40 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:54.802 10:33:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:55.063 10:33:40 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:39:55.063 10:33:40 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:55.063 10:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:55.063 10:33:40 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:39:55.063 10:33:40 keyring_file -- keyring/file.sh@104 -- # jq length 00:39:55.063 10:33:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:55.325 10:33:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:39:55.325 10:33:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FRrZd1HMIi 00:39:55.325 10:33:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FRrZd1HMIi 00:39:55.586 10:33:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WZ56uWgYFu 00:39:55.586 10:33:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WZ56uWgYFu 00:39:55.586 10:33:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.586 10:33:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:55.848 nvme0n1 00:39:55.848 10:33:41 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:39:55.848 10:33:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:56.110 10:33:41 keyring_file -- keyring/file.sh@112 -- # config='{ 00:39:56.110 "subsystems": [ 00:39:56.110 { 00:39:56.110 "subsystem": "keyring", 00:39:56.110 "config": [ 00:39:56.110 { 00:39:56.110 "method": "keyring_file_add_key", 00:39:56.110 "params": { 00:39:56.110 "name": "key0", 00:39:56.110 "path": "/tmp/tmp.FRrZd1HMIi" 00:39:56.110 } 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "method": "keyring_file_add_key", 00:39:56.110 "params": { 00:39:56.110 "name": "key1", 00:39:56.110 "path": "/tmp/tmp.WZ56uWgYFu" 00:39:56.110 } 00:39:56.110 } 00:39:56.110 ] 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "subsystem": "iobuf", 00:39:56.110 "config": [ 00:39:56.110 { 00:39:56.110 "method": "iobuf_set_options", 00:39:56.110 "params": { 00:39:56.110 "small_pool_count": 8192, 00:39:56.110 "large_pool_count": 1024, 00:39:56.110 "small_bufsize": 8192, 00:39:56.110 "large_bufsize": 135168 00:39:56.110 } 00:39:56.110 } 00:39:56.110 ] 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "subsystem": "sock", 00:39:56.110 "config": [ 00:39:56.110 { 00:39:56.110 "method": "sock_impl_set_options", 00:39:56.110 "params": { 00:39:56.110 
"impl_name": "posix", 00:39:56.110 "recv_buf_size": 2097152, 00:39:56.110 "send_buf_size": 2097152, 00:39:56.110 "enable_recv_pipe": true, 00:39:56.110 "enable_quickack": false, 00:39:56.110 "enable_placement_id": 0, 00:39:56.110 "enable_zerocopy_send_server": true, 00:39:56.110 "enable_zerocopy_send_client": false, 00:39:56.110 "zerocopy_threshold": 0, 00:39:56.110 "tls_version": 0, 00:39:56.110 "enable_ktls": false 00:39:56.110 } 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "method": "sock_impl_set_options", 00:39:56.110 "params": { 00:39:56.110 "impl_name": "ssl", 00:39:56.110 "recv_buf_size": 4096, 00:39:56.110 "send_buf_size": 4096, 00:39:56.110 "enable_recv_pipe": true, 00:39:56.110 "enable_quickack": false, 00:39:56.110 "enable_placement_id": 0, 00:39:56.110 "enable_zerocopy_send_server": true, 00:39:56.110 "enable_zerocopy_send_client": false, 00:39:56.110 "zerocopy_threshold": 0, 00:39:56.110 "tls_version": 0, 00:39:56.110 "enable_ktls": false 00:39:56.110 } 00:39:56.110 } 00:39:56.110 ] 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "subsystem": "vmd", 00:39:56.110 "config": [] 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "subsystem": "accel", 00:39:56.110 "config": [ 00:39:56.110 { 00:39:56.110 "method": "accel_set_options", 00:39:56.110 "params": { 00:39:56.110 "small_cache_size": 128, 00:39:56.110 "large_cache_size": 16, 00:39:56.110 "task_count": 2048, 00:39:56.110 "sequence_count": 2048, 00:39:56.110 "buf_count": 2048 00:39:56.110 } 00:39:56.110 } 00:39:56.110 ] 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "subsystem": "bdev", 00:39:56.110 "config": [ 00:39:56.110 { 00:39:56.110 "method": "bdev_set_options", 00:39:56.110 "params": { 00:39:56.110 "bdev_io_pool_size": 65535, 00:39:56.110 "bdev_io_cache_size": 256, 00:39:56.110 "bdev_auto_examine": true, 00:39:56.110 "iobuf_small_cache_size": 128, 00:39:56.110 "iobuf_large_cache_size": 16 00:39:56.110 } 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "method": "bdev_raid_set_options", 00:39:56.110 "params": { 00:39:56.110 "process_window_size_kb": 1024 00:39:56.110 } 00:39:56.110 }, 00:39:56.110 { 00:39:56.110 "method": "bdev_iscsi_set_options", 00:39:56.110 "params": { 00:39:56.110 "timeout_sec": 30 00:39:56.110 } 00:39:56.110 }, 00:39:56.110 { 00:39:56.111 "method": "bdev_nvme_set_options", 00:39:56.111 "params": { 00:39:56.111 "action_on_timeout": "none", 00:39:56.111 "timeout_us": 0, 00:39:56.111 "timeout_admin_us": 0, 00:39:56.111 "keep_alive_timeout_ms": 10000, 00:39:56.111 "arbitration_burst": 0, 00:39:56.111 "low_priority_weight": 0, 00:39:56.111 "medium_priority_weight": 0, 00:39:56.111 "high_priority_weight": 0, 00:39:56.111 "nvme_adminq_poll_period_us": 10000, 00:39:56.111 "nvme_ioq_poll_period_us": 0, 00:39:56.111 "io_queue_requests": 512, 00:39:56.111 "delay_cmd_submit": true, 00:39:56.111 "transport_retry_count": 4, 00:39:56.111 "bdev_retry_count": 3, 00:39:56.111 "transport_ack_timeout": 0, 00:39:56.111 "ctrlr_loss_timeout_sec": 0, 00:39:56.111 "reconnect_delay_sec": 0, 00:39:56.111 "fast_io_fail_timeout_sec": 0, 00:39:56.111 "disable_auto_failback": false, 00:39:56.111 "generate_uuids": false, 00:39:56.111 "transport_tos": 0, 00:39:56.111 "nvme_error_stat": false, 00:39:56.111 "rdma_srq_size": 0, 00:39:56.111 "io_path_stat": false, 00:39:56.111 "allow_accel_sequence": false, 00:39:56.111 "rdma_max_cq_size": 0, 00:39:56.111 "rdma_cm_event_timeout_ms": 0, 00:39:56.111 "dhchap_digests": [ 00:39:56.111 "sha256", 00:39:56.111 "sha384", 00:39:56.111 "sha512" 00:39:56.111 ], 00:39:56.111 "dhchap_dhgroups": [ 00:39:56.111 "null", 
00:39:56.111 "ffdhe2048", 00:39:56.111 "ffdhe3072", 00:39:56.111 "ffdhe4096", 00:39:56.111 "ffdhe6144", 00:39:56.111 "ffdhe8192" 00:39:56.111 ] 00:39:56.111 } 00:39:56.111 }, 00:39:56.111 { 00:39:56.111 "method": "bdev_nvme_attach_controller", 00:39:56.111 "params": { 00:39:56.111 "name": "nvme0", 00:39:56.111 "trtype": "TCP", 00:39:56.111 "adrfam": "IPv4", 00:39:56.111 "traddr": "127.0.0.1", 00:39:56.111 "trsvcid": "4420", 00:39:56.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.111 "prchk_reftag": false, 00:39:56.111 "prchk_guard": false, 00:39:56.111 "ctrlr_loss_timeout_sec": 0, 00:39:56.111 "reconnect_delay_sec": 0, 00:39:56.111 "fast_io_fail_timeout_sec": 0, 00:39:56.111 "psk": "key0", 00:39:56.111 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:56.111 "hdgst": false, 00:39:56.111 "ddgst": false 00:39:56.111 } 00:39:56.111 }, 00:39:56.111 { 00:39:56.111 "method": "bdev_nvme_set_hotplug", 00:39:56.111 "params": { 00:39:56.111 "period_us": 100000, 00:39:56.111 "enable": false 00:39:56.111 } 00:39:56.111 }, 00:39:56.111 { 00:39:56.111 "method": "bdev_wait_for_examine" 00:39:56.111 } 00:39:56.111 ] 00:39:56.111 }, 00:39:56.111 { 00:39:56.111 "subsystem": "nbd", 00:39:56.111 "config": [] 00:39:56.111 } 00:39:56.111 ] 00:39:56.111 }' 00:39:56.111 10:33:41 keyring_file -- keyring/file.sh@114 -- # killprocess 3137756 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 3137756 ']' 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@951 -- # kill -0 3137756 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@952 -- # uname 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3137756 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3137756' 00:39:56.111 killing process with pid 3137756 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@966 -- # kill 3137756 00:39:56.111 Received shutdown signal, test time was about 1.000000 seconds 00:39:56.111 00:39:56.111 Latency(us) 00:39:56.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:56.111 =================================================================================================================== 00:39:56.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:56.111 10:33:41 keyring_file -- common/autotest_common.sh@971 -- # wait 3137756 00:39:56.373 10:33:41 keyring_file -- keyring/file.sh@117 -- # bperfpid=3139240 00:39:56.373 10:33:41 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3139240 /var/tmp/bperf.sock 00:39:56.373 10:33:41 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 3139240 ']' 00:39:56.373 10:33:41 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:56.373 10:33:41 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:39:56.373 10:33:41 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:56.373 10:33:41 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:39:56.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:56.373 10:33:41 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:39:56.373 10:33:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:56.373 10:33:41 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:39:56.373 "subsystems": [ 00:39:56.373 { 00:39:56.373 "subsystem": "keyring", 00:39:56.373 "config": [ 00:39:56.373 { 00:39:56.373 "method": "keyring_file_add_key", 00:39:56.373 "params": { 00:39:56.373 "name": "key0", 00:39:56.373 "path": "/tmp/tmp.FRrZd1HMIi" 00:39:56.373 } 00:39:56.373 }, 00:39:56.373 { 00:39:56.373 "method": "keyring_file_add_key", 00:39:56.373 "params": { 00:39:56.373 "name": "key1", 00:39:56.373 "path": "/tmp/tmp.WZ56uWgYFu" 00:39:56.373 } 00:39:56.373 } 00:39:56.373 ] 00:39:56.373 }, 00:39:56.373 { 00:39:56.373 "subsystem": "iobuf", 00:39:56.373 "config": [ 00:39:56.373 { 00:39:56.373 "method": "iobuf_set_options", 00:39:56.373 "params": { 00:39:56.373 "small_pool_count": 8192, 00:39:56.373 "large_pool_count": 1024, 00:39:56.373 "small_bufsize": 8192, 00:39:56.373 "large_bufsize": 135168 00:39:56.373 } 00:39:56.373 } 00:39:56.373 ] 00:39:56.373 }, 00:39:56.373 { 00:39:56.373 "subsystem": "sock", 00:39:56.373 "config": [ 00:39:56.373 { 00:39:56.373 "method": "sock_impl_set_options", 00:39:56.373 "params": { 00:39:56.373 "impl_name": "posix", 00:39:56.373 "recv_buf_size": 2097152, 00:39:56.373 "send_buf_size": 2097152, 00:39:56.373 "enable_recv_pipe": true, 00:39:56.373 "enable_quickack": false, 00:39:56.373 "enable_placement_id": 0, 00:39:56.373 "enable_zerocopy_send_server": true, 00:39:56.373 "enable_zerocopy_send_client": false, 00:39:56.373 "zerocopy_threshold": 0, 00:39:56.373 "tls_version": 0, 00:39:56.373 "enable_ktls": false 00:39:56.373 } 00:39:56.373 }, 00:39:56.373 { 00:39:56.373 "method": "sock_impl_set_options", 00:39:56.373 "params": { 00:39:56.373 "impl_name": "ssl", 00:39:56.373 "recv_buf_size": 4096, 00:39:56.373 "send_buf_size": 4096, 00:39:56.373 "enable_recv_pipe": true, 00:39:56.373 "enable_quickack": false, 00:39:56.373 "enable_placement_id": 0, 00:39:56.373 "enable_zerocopy_send_server": true, 00:39:56.373 "enable_zerocopy_send_client": false, 00:39:56.373 "zerocopy_threshold": 0, 00:39:56.373 "tls_version": 0, 00:39:56.373 "enable_ktls": false 00:39:56.373 } 00:39:56.373 } 00:39:56.373 ] 00:39:56.373 }, 00:39:56.373 { 00:39:56.373 "subsystem": "vmd", 00:39:56.373 "config": [] 00:39:56.373 }, 00:39:56.373 { 00:39:56.373 "subsystem": "accel", 00:39:56.373 "config": [ 00:39:56.373 { 00:39:56.373 "method": "accel_set_options", 00:39:56.373 "params": { 00:39:56.373 "small_cache_size": 128, 00:39:56.373 "large_cache_size": 16, 00:39:56.373 "task_count": 2048, 00:39:56.373 "sequence_count": 2048, 00:39:56.373 "buf_count": 2048 00:39:56.373 } 00:39:56.373 } 00:39:56.373 ] 00:39:56.373 }, 00:39:56.373 { 00:39:56.373 "subsystem": "bdev", 00:39:56.374 "config": [ 00:39:56.374 { 00:39:56.374 "method": "bdev_set_options", 00:39:56.374 "params": { 00:39:56.374 "bdev_io_pool_size": 65535, 00:39:56.374 "bdev_io_cache_size": 256, 00:39:56.374 "bdev_auto_examine": true, 00:39:56.374 "iobuf_small_cache_size": 128, 00:39:56.374 "iobuf_large_cache_size": 16 00:39:56.374 } 00:39:56.374 }, 00:39:56.374 { 00:39:56.374 "method": "bdev_raid_set_options", 00:39:56.374 "params": { 00:39:56.374 "process_window_size_kb": 1024 00:39:56.374 } 00:39:56.374 }, 00:39:56.374 { 00:39:56.374 "method": 
"bdev_iscsi_set_options", 00:39:56.374 "params": { 00:39:56.374 "timeout_sec": 30 00:39:56.374 } 00:39:56.374 }, 00:39:56.374 { 00:39:56.374 "method": "bdev_nvme_set_options", 00:39:56.374 "params": { 00:39:56.374 "action_on_timeout": "none", 00:39:56.374 "timeout_us": 0, 00:39:56.374 "timeout_admin_us": 0, 00:39:56.374 "keep_alive_timeout_ms": 10000, 00:39:56.374 "arbitration_burst": 0, 00:39:56.374 "low_priority_weight": 0, 00:39:56.374 "medium_priority_weight": 0, 00:39:56.374 "high_priority_weight": 0, 00:39:56.374 "nvme_adminq_poll_period_us": 10000, 00:39:56.374 "nvme_ioq_poll_period_us": 0, 00:39:56.374 "io_queue_requests": 512, 00:39:56.374 "delay_cmd_submit": true, 00:39:56.374 "transport_retry_count": 4, 00:39:56.374 "bdev_retry_count": 3, 00:39:56.374 "transport_ack_timeout": 0, 00:39:56.374 "ctrlr_loss_timeout_sec": 0, 00:39:56.374 "reconnect_delay_sec": 0, 00:39:56.374 "fast_io_fail_timeout_sec": 0, 00:39:56.374 "disable_auto_failback": false, 00:39:56.374 "generate_uuids": false, 00:39:56.374 "transport_tos": 0, 00:39:56.374 "nvme_error_stat": false, 00:39:56.374 "rdma_srq_size": 0, 00:39:56.374 "io_path_stat": false, 00:39:56.374 "allow_accel_sequence": false, 00:39:56.374 "rdma_max_cq_size": 0, 00:39:56.374 "rdma_cm_event_timeout_ms": 0, 00:39:56.374 "dhchap_digests": [ 00:39:56.374 "sha256", 00:39:56.374 "sha384", 00:39:56.374 "sha512" 00:39:56.374 ], 00:39:56.374 "dhchap_dhgroups": [ 00:39:56.374 "null", 00:39:56.374 "ffdhe2048", 00:39:56.374 "ffdhe3072", 00:39:56.374 "ffdhe4096", 00:39:56.374 "ffdhe6144", 00:39:56.374 "ffdhe8192" 00:39:56.374 ] 00:39:56.374 } 00:39:56.374 }, 00:39:56.374 { 00:39:56.374 "method": "bdev_nvme_attach_controller", 00:39:56.374 "params": { 00:39:56.374 "name": "nvme0", 00:39:56.374 "trtype": "TCP", 00:39:56.374 "adrfam": "IPv4", 00:39:56.374 "traddr": "127.0.0.1", 00:39:56.374 "trsvcid": "4420", 00:39:56.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.374 "prchk_reftag": false, 00:39:56.374 "prchk_guard": false, 00:39:56.374 "ctrlr_loss_timeout_sec": 0, 00:39:56.374 "reconnect_delay_sec": 0, 00:39:56.374 "fast_io_fail_timeout_sec": 0, 00:39:56.374 "psk": "key0", 00:39:56.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:56.374 "hdgst": false, 00:39:56.374 "ddgst": false 00:39:56.374 } 00:39:56.374 }, 00:39:56.374 { 00:39:56.374 "method": "bdev_nvme_set_hotplug", 00:39:56.374 "params": { 00:39:56.374 "period_us": 100000, 00:39:56.374 "enable": false 00:39:56.374 } 00:39:56.374 }, 00:39:56.374 { 00:39:56.374 "method": "bdev_wait_for_examine" 00:39:56.374 } 00:39:56.374 ] 00:39:56.374 }, 00:39:56.374 { 00:39:56.374 "subsystem": "nbd", 00:39:56.374 "config": [] 00:39:56.374 } 00:39:56.374 ] 00:39:56.374 }' 00:39:56.374 [2024-05-15 10:33:41.977213] Starting SPDK v24.05-pre git sha1 4506c0c36 / DPDK 23.11.0 initialization... 
00:39:56.374 [2024-05-15 10:33:41.977304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139240 ] 00:39:56.374 EAL: No free 2048 kB hugepages reported on node 1 00:39:56.374 [2024-05-15 10:33:42.055375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.374 [2024-05-15 10:33:42.083002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.636 [2024-05-15 10:33:42.211203] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:57.209 10:33:42 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:39:57.209 10:33:42 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:39:57.209 10:33:42 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:39:57.209 10:33:42 keyring_file -- keyring/file.sh@120 -- # jq length 00:39:57.209 10:33:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.209 10:33:42 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:39:57.209 10:33:42 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:39:57.209 10:33:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:57.209 10:33:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:57.209 10:33:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:57.209 10:33:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:57.209 10:33:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.477 10:33:43 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:57.477 10:33:43 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:39:57.477 10:33:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:57.477 10:33:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:57.477 10:33:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:57.477 10:33:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:57.477 10:33:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:57.477 10:33:43 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:39:57.477 10:33:43 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:39:57.477 10:33:43 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:39:57.477 10:33:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:57.740 10:33:43 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:39:57.740 10:33:43 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:57.740 10:33:43 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FRrZd1HMIi /tmp/tmp.WZ56uWgYFu 00:39:57.740 10:33:43 keyring_file -- keyring/file.sh@20 -- # killprocess 3139240 00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 3139240 ']' 00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@951 -- # kill -0 3139240 00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@952 -- # 
uname
00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3139240
00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3139240'
00:39:57.740 killing process with pid 3139240
00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@966 -- # kill 3139240
00:39:57.740 Received shutdown signal, test time was about 1.000000 seconds
00:39:57.740
00:39:57.740 Latency(us)
00:39:57.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:57.740 ===================================================================================================================
00:39:57.740 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:39:57.740 10:33:43 keyring_file -- common/autotest_common.sh@971 -- # wait 3139240
00:39:58.002 10:33:43 keyring_file -- keyring/file.sh@21 -- # killprocess 3137598
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 3137598 ']'
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@951 -- # kill -0 3137598
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@952 -- # uname
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 3137598
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 3137598'
00:39:58.002 killing process with pid 3137598
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@966 -- # kill 3137598
00:39:58.002 [2024-05-15 10:33:43.604691] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:39:58.002 [2024-05-15 10:33:43.604731] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:39:58.002 10:33:43 keyring_file -- common/autotest_common.sh@971 -- # wait 3137598
00:39:58.264
00:39:58.264 real 0m10.507s
00:39:58.264 user 0m24.640s
00:39:58.264 sys 0m2.498s
00:39:58.264 10:33:43 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable
00:39:58.264 10:33:43 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:39:58.264 ************************************
00:39:58.264 END TEST keyring_file
00:39:58.264 ************************************
00:39:58.264 10:33:43 -- spdk/autotest.sh@292 -- # [[ n == y ]]
00:39:58.264 10:33:43 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']'
00:39:58.264 10:33:43 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:39:58.264 10:33:43 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:39:58.264 10:33:43 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']'
00:39:58.264 10:33:43 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']'
00:39:58.264 10:33:43 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']'
00:39:58.264 10:33:43 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']'
00:39:58.264
10:33:43 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:39:58.264 10:33:43 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:39:58.264 10:33:43 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:39:58.264 10:33:43 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:39:58.264 10:33:43 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:39:58.264 10:33:43 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:39:58.264 10:33:43 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:39:58.264 10:33:43 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:39:58.264 10:33:43 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:39:58.264 10:33:43 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:39:58.264 10:33:43 -- common/autotest_common.sh@721 -- # xtrace_disable 00:39:58.264 10:33:43 -- common/autotest_common.sh@10 -- # set +x 00:39:58.264 10:33:43 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:39:58.264 10:33:43 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:39:58.264 10:33:43 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:39:58.264 10:33:43 -- common/autotest_common.sh@10 -- # set +x 00:40:06.416 INFO: APP EXITING 00:40:06.416 INFO: killing all VMs 00:40:06.416 INFO: killing vhost app 00:40:06.416 WARN: no vhost pid file found 00:40:06.416 INFO: EXIT DONE 00:40:08.397 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:65:00.0 (144d a80a): Already using the nvme driver 00:40:08.397 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:40:08.397 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:40:12.612 Cleaning 00:40:12.612 Removing: /var/run/dpdk/spdk0/config 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:12.612 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:12.612 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:12.612 Removing: /var/run/dpdk/spdk1/config 00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:40:12.612 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:40:12.612 Removing: /var/run/dpdk/spdk1/hugepage_info
00:40:12.612 Removing: /var/run/dpdk/spdk1/mp_socket
00:40:12.612 Removing: /var/run/dpdk/spdk2/config
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:40:12.612 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:40:12.612 Removing: /var/run/dpdk/spdk2/hugepage_info
00:40:12.612 Removing: /var/run/dpdk/spdk3/config
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:40:12.612 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:40:12.612 Removing: /var/run/dpdk/spdk3/hugepage_info
00:40:12.612 Removing: /var/run/dpdk/spdk4/config
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:40:12.612 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:40:12.612 Removing: /var/run/dpdk/spdk4/hugepage_info
00:40:12.612 Removing: /dev/shm/bdev_svc_trace.1
00:40:12.612 Removing: /dev/shm/nvmf_trace.0
00:40:12.612 Removing: /dev/shm/spdk_tgt_trace.pid2584896
00:40:12.612 Removing: /var/run/dpdk/spdk0
00:40:12.612 Removing: /var/run/dpdk/spdk1
00:40:12.612 Removing: /var/run/dpdk/spdk2
00:40:12.612 Removing: /var/run/dpdk/spdk3
00:40:12.613 Removing: /var/run/dpdk/spdk4
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2583337
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2584896
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2585449
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2586629
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2586796
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2587856
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2588188
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2588313
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2589438
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2589980
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2590292
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2590668
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2591076
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2591463
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2591765
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2591894
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2592235
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2593381
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2596646
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2596970
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2597289
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2597610
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2597983
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2597997
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2598607
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2598704
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2599058
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2599079
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2599433
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2599484
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2600059
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2600239
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2600632
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2601002
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2601025
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2601118
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2601438
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2601788
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2602144
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2602322
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2602528
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2602883
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2603271
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2603628
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2603794
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2604080
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2604429
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2604779
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2605332
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2605649
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2605979
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2606326
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2606687
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2606875
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2607072
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2607427
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2607548
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2607904
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2612366
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2709348
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2714489
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2726172
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2732574
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2737826
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2738593
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2752769
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2752850
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2753862
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2754868
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2755911
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2756509
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2756658
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2756858
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2757103
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2757113
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2758113
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2759118
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2760126
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2760793
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2760801
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2761131
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2762313
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2763631
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2773696
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2774155
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2779023
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2786035
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2789004
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2801861
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2812510
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2814532
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2815542
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2835808
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2840476
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2870719
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2876034
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2877918
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2880098
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2880128
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2880149
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2880382
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2880851
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2882869
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2883751
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2884311
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2886754
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2887472
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2888299
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2893720
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2900284
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2906065
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2949987
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2954797
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2962003
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2963503
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2965202
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2970339
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2974902
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2983976
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2983981
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2989482
00:40:12.613 Removing: /var/run/dpdk/spdk_pid2989797
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2989900
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2990486
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2990496
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2991868
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2993863
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2995759
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2997559
00:40:12.875 Removing: /var/run/dpdk/spdk_pid2999540
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3001546
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3008571
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3009352
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3010258
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3011100
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3017256
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3020268
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3026602
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3033420
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3043153
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3051385
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3051424
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3073429
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3074106
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3074797
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3075481
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3076540
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3077218
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3077904
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3078671
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3084191
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3084524
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3091567
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3091800
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3094458
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3101569
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3101577
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3107442
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3109642
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3111980
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3113341
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3115652
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3117070
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3127000
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3127666
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3128314
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3131565
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3132212
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3132881
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3137598
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3137756
00:40:12.875 Removing: /var/run/dpdk/spdk_pid3139240
00:40:12.875 Clean
00:40:13.136 10:33:58 -- common/autotest_common.sh@1448 -- # return 0
00:40:13.136 10:33:58 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:40:13.136 10:33:58 -- common/autotest_common.sh@727 -- # xtrace_disable
00:40:13.137 10:33:58 -- common/autotest_common.sh@10 -- # set +x
00:40:13.137 10:33:58 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:40:13.137 10:33:58 -- common/autotest_common.sh@727 -- # xtrace_disable
00:40:13.137 10:33:58 -- common/autotest_common.sh@10 -- # set +x
00:40:13.137 10:33:58 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:13.137 10:33:58 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:13.137 10:33:58 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:13.137 10:33:58 -- spdk/autotest.sh@387 -- # hash lcov
00:40:13.137 10:33:58 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:40:13.137 10:33:58 -- spdk/autotest.sh@389 -- # hostname
00:40:13.137 10:33:58 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:13.398 geninfo: WARNING: invalid characters removed from testname!
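The coverage steps here (autotest.sh@389 above, and @390-392 just below) form lcov's standard capture/merge/filter sequence. A minimal standalone sketch of that flow, assuming a placeholder WORKSPACE path and a trimmed option set rather than this job's exact invocation:

#!/usr/bin/env bash
# Sketch of the coverage post-processing driven by spdk/autotest.sh in this log.
# WORKSPACE is an assumed placeholder, not the job's real path.
WORKSPACE=/path/to/spdk
OUT="$WORKSPACE/../output"
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# @389: capture the counters produced during the test run, tagged with the hostname.
lcov $LCOV_OPTS -c -d "$WORKSPACE" -t "$(hostname)" -o "$OUT/cov_test.info"

# @390: merge the pre-test baseline with the post-test capture.
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# @391-@392: strip third-party (DPDK) and system sources from the combined report.
lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov $LCOV_OPTS -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"

In this run the sequence never completes: the @392 '/usr/*' filter is still executing when the timeout interrupt below arrives, and it is the lcov process the epilogue later kills.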
00:40:36.414 Cancelling nested steps due to timeout
00:40:36.417 Sending interrupt signal to process
00:40:39.994 10:34:22 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:39.994 10:34:25 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:41.384 10:34:27 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:41.959 Terminated
00:40:41.968 script returned exit code 143
00:40:41.971 [Pipeline] }
00:40:41.991 [Pipeline] // stage
00:40:41.997 [Pipeline] }
00:40:42.017 [Pipeline] // timeout
00:40:42.025 [Pipeline] }
00:40:42.028 Timeout has been exceeded
00:40:42.028 org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: c9f56d95-4876-47a8-9bd4-13368363fe2a
00:40:42.045 [Pipeline] // catchError
00:40:42.050 [Pipeline] }
00:40:42.066 [Pipeline] // wrap
00:40:42.071 [Pipeline] }
00:40:42.085 [Pipeline] // catchError
00:40:42.093 [Pipeline] stage
00:40:42.095 [Pipeline] { (Epilogue)
00:40:42.107 [Pipeline] catchError
00:40:42.109 [Pipeline] {
00:40:42.122 [Pipeline] echo
00:40:42.124 Cleanup processes
00:40:42.129 [Pipeline] sh
00:40:42.420 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:42.420 3151887 /usr/bin/perl /usr/bin/lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info /usr/* -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:40:42.420 3151898 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:42.437 [Pipeline] sh
00:40:42.727 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:40:42.727 ++ grep -v 'sudo pgrep'
00:40:42.727 ++ awk '{print $1}'
00:40:42.727 + sudo kill -9 3151887
00:40:42.742 [Pipeline] sh
00:40:43.032 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:55.359 [Pipeline] sh
00:40:55.648 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:55.648 Artifacts sizes are good
00:40:55.663 [Pipeline] archiveArtifacts
00:40:55.670 Archiving artifacts
00:40:55.920 [Pipeline] sh
00:40:56.207 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:56.222 [Pipeline] cleanWs
00:40:56.232 [WS-CLEANUP] Deleting project workspace...
00:40:56.232 [WS-CLEANUP] Deferred wipeout is used...
00:40:56.239 [WS-CLEANUP] done
00:40:56.241 [Pipeline] }
00:40:56.261 [Pipeline] // catchError
00:40:56.271 [Pipeline] echo
00:40:56.273 Tests finished with errors. Please check the logs for more info.
00:40:56.276 [Pipeline] echo
00:40:56.277 Execution node will be rebooted.
00:40:56.293 [Pipeline] build
00:40:56.295 Scheduling project: reset-job
00:40:56.305 [Pipeline] sh
00:40:56.594 + logger -p user.info -t JENKINS-CI
00:40:56.605 [Pipeline] }
00:40:56.621 [Pipeline] // stage
00:40:56.626 [Pipeline] }
00:40:56.643 [Pipeline] // node
00:40:56.649 [Pipeline] End of Pipeline
00:40:56.688 Finished: ABORTED